Brain-Computer Interface Ergonomic Controllers

#### **Chapter 8**

Evaluating Steady-State Visually Evoked Potentials-Based Brain-Computer Interface System Using Wavelet Features and Various Machine Learning Methods

*Ebru Sayilgan, Yilmaz Kemal Yuce and Yalcin Isler*

### **Abstract**

Steady-state visual evoked potentials (SSVEPs) have been shown to be appropriate for, and are in use in, many areas such as clinical neuroscience, cognitive science, and engineering. SSVEPs have become popular recently due to their advantages, including a high bit rate, simple system structure, and short training time. To design an SSVEP-based BCI system, signal processing methods appropriate to the signal structure should be applied. One of the most appropriate signal processing methods for these non-stationary signals is the Wavelet Transform. In this study, we investigated both the effect of the choice of mother wavelet function and the most successful combination of classifier algorithm, wavelet features, and frequency pairs assigned to BCI commands. SSVEP signals recorded at seven different stimulus frequencies (6, 6.5, 7, 7.5, 8.2, 9.3, and 10 Hz) were used. A total of 115 features were extracted from the time, frequency, and time-frequency domains. These features were classified by seven different classifiers. Classification performance was evaluated with the 5-fold cross-validation method and reported as accuracy values. According to the results, (I) the most successful wavelet function was the Haar wavelet, (II) the most successful classifier was Ensemble Learning, (III) the feature vector combining energy, entropy, and variance yielded higher accuracy than any of these features alone, and (IV) the highest performances were obtained for the frequency pairs "6–10", "6.5–10", "7–10", and "7.5–10" Hz.

**Keywords:** steady-state visually-evoked potentials (SSVEP), brain-computer interfaces (BCI), wavelet transform (WT), mother wavelet selection, pattern recognition, machine learning (ML)

#### **1. Introduction**

Electroencephalogram (EEG) signals are one of the most widely used types of biomedical signals for Brain-Computer Interfaces (BCIs), owing to their portability, high time resolution, ease of acquisition, and cost-effectiveness as compared to other brain activity monitoring techniques [1–3]. There are four typical EEG-based BCI paradigms: steady-state visual-evoked potentials (SSVEP), slow cortical potentials (SCP), the P300 component of evoked potentials, and sensory-motor rhythms (SMR) [4–6].

The SSVEP signal is a periodic response to a visual stimulus modulated at a frequency greater than 6 Hz [7] or 4 Hz [8]. The amplitude and phase characteristics of the SSVEP depend on stimulus intensity and frequency. SSVEP responses can be reproduced repeatedly if the stimuli are presented under controlled conditions [9]. For instance, staring at a flickering light that flashes at a constant frequency stimulates the human visual pathway. The flicker frequency propagates throughout the brain: the stimulation produces electrical signals in the brain at the base frequency of the flashing light, as well as at its harmonics [10]. In practice, there is a marked reduction in the power of the SSVEP signal from the second harmonic onwards. This has been attributed to the low signal-to-noise ratio of SSVEP signals at high frequencies and can be accounted for by brain dynamics that act as a low-pass filter [11].
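As a toy illustration of this spectral structure, the following sketch synthesizes a 10 Hz SSVEP-like signal with progressively weaker harmonics buried in noise, then locates the spectral peaks via the FFT. The amplitudes and noise level are hypothetical, not values from the chapter or the dataset:

```python
import numpy as np

np.random.seed(0)
fs = 512                     # sampling rate (Hz), matching the dataset
t = np.arange(0, 4, 1 / fs)  # 4 s of signal
f0 = 10                      # stimulus frequency (Hz)

# Toy SSVEP: fundamental plus weaker 2nd and 3rd harmonics in white noise
x = (1.0 * np.sin(2 * np.pi * f0 * t)
     + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.15 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.5 * np.random.randn(t.size))

# Power spectrum via the FFT of the real signal
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / t.size

def peak_power(f):
    """Power at the frequency bin nearest f."""
    return power[np.argmin(np.abs(freqs - f))]

# Power drops sharply from the fundamental to the higher harmonics
print(peak_power(f0), peak_power(2 * f0), peak_power(3 * f0))
```

Running this shows the fundamental carrying most of the power, with the second and third harmonics clearly above the noise floor but an order of magnitude weaker, mirroring the roll-off described above.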

The analysis of EEG signals using machine learning (ML) methods has been developed to help physicians reach accurate diagnoses and to provide fast and valid tools in assistive applications designed for individuals. Among the various approaches available in the literature, the Wavelet Transform (WT) has proven to be an effective time-frequency analysis tool for analyzing transient signals [12, 13]. Various wavelet families are available to describe and adapt to signal characteristics [14]. However, choosing an appropriate mother wavelet is very important for the analysis of these signals. Research to date on EEG-signal classification using the wavelet technique has mostly been done using the Daubechies (Db) family. The maximum accuracy achieved in one such study was 95.00% [15]. However, although the signal was suitable for the Discrete Wavelet Transform (DWT), the analysis in that study was performed using the Continuous Wavelet Transform (CWT), and only a single frequency was analyzed. In this chapter, a detailed analysis was performed using multiple frequencies. Also, in Ref. [16], the SSVEP signal was analyzed with a single wavelet type (Db40), and no mother wavelet selection was made. Thus, mother wavelet selection for SSVEP is still an open question.

The research presented in this chapter mainly concerns selecting the most suitable wavelet function for the analysis of SSVEP signals, a detailed investigation of the energy, entropy, and variance attributes, and determining the appropriate frequency (or frequencies) for SSVEP-based BCI design.

To our knowledge, there is no in-depth study on the selection of stimulation frequencies. We noticed that higher accuracy rates could be obtained for pattern recognition by examining the choice of frequencies and the differences between them. The frequency or frequencies that might yield higher accuracy rates and time advantages are expected to help design user-friendly BCI systems. This study was conducted to address these shortcomings in the literature.

#### **2. Materials and methods**

#### **2.1 Data recording process and users**

In this study, the dataset (AVI SSVEP Dataset) containing SSVEP signals designed and recorded by Adnan Vilic was used [17]. The data set contains EEG measurements of healthy individuals (three men and one woman

*Evaluating Steady-State Visually Evoked Potentials-Based Brain-Computer Interface System… DOI: http://dx.doi.org/10.5772/intechopen.98335*

aged 27 to 32) looking at a flickering target to trigger SSVEP responses at different frequencies; the data set is publicly available. Following the standard international 10–20 system for electrode placement, the reference electrode was positioned at Fz, the signal electrode at Oz, and the ground electrode at Fpz. In this experiment, individuals were seated 60 cm away from a monitor, staring at a single flashing target whose color changed rapidly from black to white. The test stimulus was a flashing box presented on the monitor at seven different frequencies (6, 6.5, 7, 7.5, 8.2, 9.3, and 10 Hz). The data set comprises four sessions with four different participants. Each trial in a session lasts 30 seconds, and participants take a short break between trials. Experiments were repeated at least three times for each frequency.

In **Figure 1**, (a) the raw signal recorded under 10 Hz stimulation and (b) its power spectral density (with the 1st and 2nd harmonics) are shown.

#### **2.2 Feature extraction**

From the neurophysiology of the human visual system, it is known that neuronal activity in the visual cortex is modulated by visual stimulation, and that the brain response varies with features of the visual stimulus such as brightness, contrast, and frequency [18]. Neurons in the visual cortex synchronize their firing to the flicker frequency of the visual stimulus. SSVEP signals are generated when visual stimuli are presented repeatedly, creating almost sinusoidal oscillations [19]. Applying a visual stimulus flashing at a constant frequency increases the energy of brain activity compared with applying a constant visual stimulus [7]. The strongest response occurs in the visual (occipital) cortex, but other areas of the brain are also activated to different degrees [8, 9]. SSVEP responses can be detected even in narrow frequency bands around the visual stimulation frequency using signal processing methods that exploit specific features of the signal such as timing, frequency, and rhythm [20]. For this reason, this study was designed around accepted signal processing strategies suited to these signal characteristics.

#### *2.2.1 Time-domain based feature extraction*

The SSVEP time-domain features are extracted, based on information available in the literature, in the original (time) domain of the EEG signal. **Table 1** describes the relevant and distinctive SSVEP time-domain features we identified. These features are based on the amplitude (e.g., average amplitude change value, root mean square, interquartile


**Table 1.**

*EEG time-domain features (the EEG signal is represented by x, and* $F\_i^{(t)}$ *stands for the EEG features computed from x).*

ranges, etc.) and statistical properties of the EEG signal (e.g., mean, variance, skewness, and kurtosis) [20].
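A minimal sketch of such time-domain features with NumPy/SciPy; the exact feature set and names are illustrative, not a full reproduction of the chapter's Table 1:

```python
import numpy as np
from scipy import stats

def time_domain_features(x):
    """Amplitude- and statistics-based time-domain features of an EEG epoch."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": np.mean(x),
        "variance": np.var(x, ddof=1),
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
        "rms": np.sqrt(np.mean(x ** 2)),        # root mean square amplitude
        "iqr": stats.iqr(x),                    # interquartile range
        "aac": np.mean(np.abs(np.diff(x))),     # average amplitude change
    }

# Sanity check on a pure sinusoid (RMS of a unit sine is about 0.707)
feats = time_domain_features(np.sin(np.linspace(0, 2 * np.pi, 512)))
print(feats["rms"])
```

Each epoch thus yields one small numeric vector; stacking these vectors over trials gives the time-domain feature matrix fed to the classifiers.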

#### *2.2.2 Frequency-domain based feature extraction*

SSVEP signals are identified by oscillations with frequencies synchronized with the stimulus frequency [6, 21]. For this reason, many SSVEP-based BCI systems use frequency information embedded in the signal in the feature extraction process [22, 23]. Within the scope of this chapter, SSVEP frequency features were extracted from the frequency domain representation of the SSVEP signal using a Fourier transform. The relevant and distinctive SSVEP frequency characteristics we detected are based on the spectral information of SSVEP signals for each EEG rhythm, such as energy, variance and spectral entropy.

These features describe how power, variance, and irregularity (entropy) change in certain related frequency bands. In practice, this implies that these features summarize the signal power within certain frequency bands [24].

Features based on power spectrum, energy of each frequency band,

$$F\_1^{(f)} = Energy\_f = \sum\_{k=1}^{M} y(k)^2 \tag{1}$$

Here, *y* is the Fourier transform of the analytic signal of the real discrete-time EEG signal *x*; $F\_1^{(f)} = Energy\_f$ denotes the EEG feature computed from *y*, and *M* corresponds to the maximum frequency index.

Features based on variance of each EEG frequency band,

$$F\_2^{(f)} = Variance\_f = \frac{1}{M - 1} \sum\_{k=1}^{M} \left( y(k) - \overline{y} \right)^2 \tag{2}$$

In the formula, $\overline{y}$ stands for the average of the signal *y*.


Feature based on the entropy of each EEG frequency band: spectral entropy measures the irregularity (flatness) of the power spectrum of the EEG signal,

$$F\_3^{(f)} = Entropy\_f = -\frac{1}{\log(M)} \sum\_{k=1}^{M} P(y(k)) \log P(y(k)) \tag{3}$$
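A sketch of these band-wise features in the spirit of Eqs. (1)–(3): the FFT is split into EEG rhythm bands, and energy, variance, and normalized spectral entropy are computed per band. The band edges are conventional EEG values assumed here, not stated in the chapter:

```python
import numpy as np

# Conventional EEG band edges in Hz (an assumption, not from the chapter)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_features(x, fs):
    """Energy, variance, and normalized spectral entropy per EEG band."""
    y = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        yb = np.abs(y[(freqs >= lo) & (freqs < hi)])
        energy = np.sum(yb ** 2)                       # Eq. (1) analogue
        variance = np.var(yb, ddof=1)                  # Eq. (2) analogue
        p = yb ** 2 / energy                           # normalized band power
        entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))  # Eq. (3)
        feats[name] = (energy, variance, entropy)
    return feats

fs = 512
t = np.arange(0, 2, 1 / fs)
f = band_features(np.sin(2 * np.pi * 10 * t), fs)  # 10 Hz tone falls in alpha
print(max(f, key=lambda b: f[b][0]))               # band with the most energy
```

For a pure 10 Hz tone, nearly all energy lands in the alpha band and that band's spectral entropy is close to zero, since the power is concentrated in a single bin.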

#### *2.2.3 Wavelet transform based feature extraction*

#### *2.2.3.1 Wavelet decomposition*

The SSVEP signal is non-stationary [18]. Consequently, the WT has been used to examine not only the spectral content of the signal but also its spectral behavior over time. The method is characterized by a smooth, fast-oscillating function that is well localized in both frequency and time [12]. The WT can be implemented as a specially designed pair of Finite Impulse Response (FIR) filters. The frequency responses of these FIR filters separate the high-frequency and low-frequency components of the input signal. The band is usually split halfway between 0 Hz and half the sampling rate (the Nyquist frequency). In the Multi-Resolution Analysis (MRA) of the WT, the same wavelet coefficients are used in both the low-pass (LP) and high-pass (HP) filters. The LP filter coefficients are associated with the scaling function, which determines the oscillatory frequency and the length of the wavelet, while the HP filter is associated with the wavelet function. The outputs of the LP filters are called the approximation *(a)* coefficients, and the outputs of the HP filters are called the detail *(d)* coefficients. In the MRA of the WT, any time series can be entirely decomposed into *a* and *d* coefficients up to the chosen decomposition level. Applying the DWT to the raw signal produces a multi-resolution representation of various statistical and non-statistical parameters across time and frequency [24]. Subsets of the wavelet coefficients of the decomposition tree were selected as input vectors to the classifier. The SSVEP signals were decomposed into 9 levels (*i* = 1, 2, …, 9) for the 512 Hz sampling frequency.
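The two-channel filter-bank idea can be sketched for the Haar wavelet, whose LP and HP filters reduce to simple pairwise sums and differences. This is a hand-rolled stand-in for a wavelet library's multilevel decomposition, shown only to make the MRA mechanics concrete:

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level Haar DWT via the two-channel filter bank.
    Returns [a_L, d_L, d_{L-1}, ..., d_1], approximation first."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        d = (even - odd) / np.sqrt(2)   # HP output: detail coefficients
        a = (even + odd) / np.sqrt(2)   # LP output: approximation coefficients
        details.append(d)
    return [a] + details[::-1]

# 512 samples decompose into 9 dyadic levels, as in the chapter (512 Hz)
x = np.random.randn(512)
coeffs = haar_dwt(x, 9)
print([c.size for c in coeffs])  # [1, 1, 2, 4, 8, 16, 32, 64, 128, 256]

# The Haar transform is orthonormal, so total energy is preserved (Parseval)
total = sum(np.sum(c ** 2) for c in coeffs)
```

Each halving of the coefficient count corresponds to one halving of the analyzed frequency band, which is exactly the dyadic band split the MRA description above refers to.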

#### *2.2.3.2 Parameters for feature extraction*

Using different DWT functions (Haar, Db2, Sym4, Coif1, Bior3.5, rbio2.8), the SSVEP signals were subdivided into frequency bands (delta, theta, alpha, beta, gamma), and the energy, entropy, and variance were calculated for each band [13, 14]. Every DWT frequency band is associated with one or two EEG rhythms. Thus, a number of features representing the frequency bands were obtained.

Energy at each decomposition level was calculated using the following Equations [24]:

$$F\_1^{(w)} = Ed\_i = \sum\_{j=1}^{N} \left| d\_{ij} \right|^2, i = 1, 2, 3, \dots, l \tag{4}$$

$$F\_1^{(w)} = Ea\_i = \sum\_{j=1}^{N} \left| a\_{ij} \right|^2, i = 1, 2, 3, \dots, l \tag{5}$$

where *d*ij and *a*ij represent the detail and approximation coefficients, respectively, formed at the wavelet level corresponding to each EEG band (delta, theta, alpha, beta, gamma); *i* = 1, 2, 3, …, *l* is the wavelet decomposition level from 1 to *l*. Finally, *N* stands for the number of detail and approximation coefficients at each decomposition level.

Another feature, the entropy at each decomposition level is calculated using the following Equation [25]:

$$F\_2^{(w)} = Et\_i = -\sum\_{j=1}^{N} d\_{ij}^2 \log \left( d\_{ij}^2 \right), i = 1, 2, 3, \dots, l \tag{6}$$

The variance at each decomposition level was calculated using the following Equation [24]:

$$F\_3^{(w)} = Var\_i = \frac{1}{N - 1} \sum\_{j=1}^{N} \left( d\_{ij} - \mu\_i \right)^2, \mu\_i = \frac{1}{N} \sum\_{j=1}^{N} d\_{ij}, i = 1, 2, 3, \dots, l \tag{7}$$

The extracted features are used as (*l* + 1)-dimensional input vectors. In other words, for an *l*-level decomposition, the feature vector of any parameter can be represented as Feature = [*xd*1, *xd*2, … , *xdl*, *xal*], where *x* stands for energy, entropy, or variance.
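Assembling the level-wise quantities of Eqs. (4)–(7) into one feature vector can be sketched as follows; the coefficient arrays are made-up toy values standing in for real DWT output:

```python
import numpy as np

def wavelet_features(coeffs):
    """Energy (Eqs. 4-5), entropy (Eq. 6), and variance (Eq. 7) per level.
    `coeffs` is a list of coefficient arrays, approximation first."""
    feats = {"energy": [], "entropy": [], "variance": []}
    for c in coeffs:
        c = np.asarray(c, dtype=float)
        e = c ** 2
        feats["energy"].append(np.sum(e))
        # Eq. (6): unnormalized wavelet entropy, -sum d^2 log(d^2)
        feats["entropy"].append(-np.sum(e * np.log(e + 1e-12)))
        feats["variance"].append(np.var(c, ddof=1))   # Eq. (7), 1/(N-1) form
    return feats

# Toy 2-level decomposition: [a2, d2, d1] (values are illustrative only)
a2 = np.array([2.0, 1.5])
d2 = np.array([1.0, -0.4])
d1 = np.array([0.5, -0.2, 0.1, 0.3])

fv = wavelet_features([a2, d2, d1])
feature_vector = np.concatenate([fv["energy"], fv["entropy"], fv["variance"]])
print(feature_vector.shape)  # (9,)
```

One such vector per mother wavelet and per trial is what the classifiers below receive as input.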

#### **2.3 Machine learning classification algorithms**

One of the most important uses of machine learning (ML) methods is classification [26]. After feature extraction, classification is performed to recognize an SSVEP signal and convert it into a command, that is, to use it as an output [27]. For the classification process, datasets formed by a certain number of labeled feature vectors (each with a known class) are passed through the training stage required by the classifier type. As a result of this training, a decision mechanism is created, which is used to assign an unknown signal to the appropriate class [28, 29].

The extracted feature vectors have been tested with seven well-known and commonly-used basic classifiers. These selected classifier algorithms are *Decision Trees, Discriminant Analysis, Logistic Regression, Naive Bayes, Support Vector Machines, k-Nearest Neighbors, and Ensemble Learner.* The classifier performances were examined to determine which combination of mother wavelet function, wavelet features, and classifier algorithm gives the highest accuracy.
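A hedged sketch of such a comparison with scikit-learn, using 5-fold cross-validation on synthetic data. A Random Forest stands in for MATLAB's Ensemble Learner, and the generated features are a placeholder for the chapter's SSVEP feature vectors:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the SSVEP feature vectors of a binary frequency pair
X, y = make_classification(n_samples=200, n_features=15, n_informative=8,
                           random_state=0)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Ensemble (Random Forest)": RandomForestClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy per classifier
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:25s} {acc:.3f}")
```

The ranking obtained this way depends on the data, which is precisely why the chapter runs the full grid of (wavelet, feature, classifier) combinations rather than trusting any single pairing.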

#### **2.4 Evaluation of machine learning algorithms performance**

While training an ML algorithm to classify SSVEP signals is an important step, it is essential to consider how the algorithm generalizes to unseen data (the test set) [30]. We need to know whether the algorithm works correctly and whether we can trust its predictions. A machine learning algorithm must not merely memorize the training set; it should make reasonable predictions about future examples it has not seen before. Thus, one of the essential steps for BCI systems is to know and apply the techniques used to evaluate how well an ML model generalizes to new, unseen data [31, 32]. To this end, the "k-fold cross-validation" and "confusion matrix" evaluation criteria were used to assess the performance of the ML algorithms in this study.

#### *2.4.1 k-fold cross-validation*

In this method, the data set is randomly divided into k segments. Among these segments, k-1 parts are used for the training, and the remaining part is used for the testing. This process is repeated until all parts are used for testing separately. The


test errors are recorded each time, and after the last fold the average error is reported. The performance of each classifier algorithm was measured using this approach [30, 31]. In this study, the data set was divided into five equal parts.
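The fold mechanics described above can be sketched without any ML library, showing only how the indices are partitioned and that every sample is tested exactly once:

```python
import numpy as np

def k_fold_indices(n, k, seed=0):
    """Randomly partition n sample indices into k folds;
    yield (train, test) index pairs, one per fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]                                      # held-out fold
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

n, k = 100, 5
sizes = []
covered = []
for train, test in k_fold_indices(n, k):
    sizes.append((train.size, test.size))
    covered.extend(test.tolist())

print(sizes)                              # each fold: 80 train / 20 test
print(sorted(covered) == list(range(n)))  # every sample tested once -> True
```

With k = 5, as in this study, each classifier is trained five times on 80% of the data and tested on the remaining 20%, and the five test accuracies are averaged.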

#### *2.4.2 Confusion matrix*

The confusion matrix is first calculated to evaluate classifier performance. It is generated by comparing the responses of the classification algorithm on the test set with the actual values in the data set. For two-class problems, it is a table consisting of four different quantities [26]: the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) values.

Accuracy value (ACC) is calculated as classifier performance based on these values [27]:

$$ACC = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{FN} + \text{FP} + \text{TN}} \tag{8}$$
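Eq. (8) written directly in code, with hypothetical confusion-matrix counts for illustration:

```python
def accuracy_from_confusion(tp, tn, fp, fn):
    """Eq. (8): ACC = (TP + TN) / (TP + FN + FP + TN)."""
    return (tp + tn) / (tp + fn + fp + tn)

# Hypothetical binary SSVEP result: 45 of 50 positives and 42 of 50
# negatives classified correctly
acc = accuracy_from_confusion(tp=45, tn=42, fp=8, fn=5)
print(acc)  # 0.87
```

With balanced classes, as in the binary frequency-pair experiments here, accuracy is an adequate summary; for imbalanced problems the same four counts also yield sensitivity and specificity.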

#### **2.5 Experimental design and implementation details**

In accordance with the objective of our study, we designed the experiments in a two-fold manner for the time-frequency-domain features. First, we measured the accuracy of each (feature, mother wavelet function) pair. Second, we combined the set of three features with each mother wavelet function in order to discover which mother wavelet function yields the best accuracy. Three important features (energy, variance, and entropy) were extracted for the EEG bands (delta, theta, alpha, beta, and gamma) using six different mother wavelet families (Haar, db, sym, coif, bior, rbio). The algorithms were implemented using the Signal Processing Toolbox and Wavelet Toolbox in MATLAB R2019a. All classifiers and performance analyses were implemented using the Classification Learner App in MATLAB R2019a.

#### **3. Results and discussion**

Characterized as an increase in the amplitude at the stimulating frequency, the photic driving response produces significant peaks at the base frequency and its harmonics [33]. Thus, it is possible to determine the stimulus frequency from the SSVEP measurement. For this purpose, 115 feature vectors were extracted from the SSVEP signals recorded using seven different frequencies. The extracted feature vectors were evaluated with seven basic ML algorithms. At the same time, the frequencies that constitute the SSVEP data set were evaluated with multiple, selected three-class, and binary classifications. The effect of increasing the difference between frequencies on the accuracy was also investigated; the results are shown in detail in **Figures 2**–**17** and **Tables 2**–**5**.

#### **3.1 Time-domain features results**

The multiple and binary classification results of 25 feature vectors extracted from SSVEP signals using time-domain properties are given below, respectively.

**Figure 2.** *Binary classification performance of the time-domain features.*

**Figure 3.**

*Percentage of cases in which each classifier obtained the best result, over 2,520 algorithm runs in total.*

**Figure 4.** *Results of selected 3-class classifications for frequency-domain features.*


#### **Figure 8.**

*Classification performance of energy, entropy, and variance together as a feature set (all features together).*

#### **Figure 9.**

*Percentage of cases in which each classifier obtained the best result, over 2,520 algorithm runs in total: a) energy, entropy, and variance as separate features; b) energy, entropy, and variance as a feature set.*

#### **Figure 10.**

*Binary classification performance of the features for bior 3.5 mother wavelet function.*


**Figure 11.** *Binary classification performance of the features for coif 1 mother wavelet function.*

**Figure 12.** *Binary classification performance of the features for Db 4 mother wavelet function.*

**Figure 13.**

*Binary classification performance of the features for Haar mother wavelet function.*

**Figure 14.**

*Binary classification performance of the features for Rbio 2.8 mother wavelet function.*

**Figure 15.** *Binary classification performance of the features for Sym 4 mother wavelet function.*

**Figure 16.** *Change of accuracy value according to the differences between frequencies for mother wavelet functions.*

#### *3.1.1 Multiple classification results*

Presented in **Table 2** are accuracy results for multiple classification. In regard to these results, the highest performance was shown by the Ensemble Learning classifier with 52.40%.


#### **Figure 17.**

*Percentage of cases in which each classifier obtained the best result, over 2,520 algorithm runs in total (for the Haar wavelet function).*


#### **Table 2.**

*Results of multiple classification for time-domain features.*


#### **Table 3.**

*Results of multiple classification for frequency-domain features.*

#### *3.1.2 Binary classification results*

According to the binary classification results shown in **Figure 2**, the best performance was obtained with an accuracy value of 91.68% in 6–10 Hz frequency


#### **Table 4.**

*Multiple classification results of wavelet features.*


#### **Table 5.**

*Classification results of the most successful frequency pairs of the Haar mother wavelet.*

pair, based on the average across subjects. When the subjects were considered separately, classification performance of up to 100% was obtained. In addition, for the time-domain features, no definitive relationship was found between the accuracy value and the difference between the frequencies.

The pie chart in **Figure 3** shows how often each classifier achieved the best result. These counts were obtained by running all algorithms 2,520 times in total. The best classification performance was shown by the Ensemble Learning classifier.

#### **3.2 Frequency-domain features results**

For the frequency-domain features used in the problem of discriminating seven different frequencies, spectrum analysis was first performed to detect the stimulus frequencies more clearly from the signal. This analysis is often used to obtain frequency information in evoked SSVEP responses. The power spectrum of the SSVEP signals was determined by FFT using MATLAB, and the power, entropy, and variance were calculated for each band in the relevant frequency range. For this purpose, the FFT of the signal is divided into EEG bands (delta, theta, alpha, beta, gamma), and the energy, entropy, and variance of each band are calculated. A total of 15 feature vectors are generated.

#### *3.2.1 Multiple classification results*

According to the multiple classification results of the seven frequencies presented in **Table 3**, the best performance was achieved by the Ensemble Learning classifier, with an accuracy of 57.10%. Another remarkable finding is that the classifier results were the same for all individuals. This shows that, as in the time domain, the Ensemble Learning classifier performs better than the others. In addition, when the multiple classification results of the frequency-domain features are compared with those of the time-domain features, there is an increase of 4.70% on an individual basis and 3.18% on average.

#### *3.2.2 Selected three class classification results*

In this part, three frequencies (6 Hz, 8.2 Hz, and 10 Hz), which were considered to increase the classification performance, were chosen among the seven frequencies present in the data set during the feature extraction phase. These frequencies were chosen based on the results of the studies in Refs. [12, 13, 20].

According to the results obtained (**Figure 4**), the highest classification performance was 83.30% for the first participant, with the Ensemble Learning classifier; 100% for the second participant, with the KNN and SVM classifiers; and 88.90% for the third participant, with the KNN classifier. For the fourth participant, the highest performance was again with the Ensemble Learning classifier, at 77.80%.

When the results are evaluated by classifier, the performance of the six different classifiers was calculated by averaging over the four participants, and the highest performance was found for the Ensemble Learning classifier, with an accuracy of 79.73%.

#### *3.2.3 Binary classification results*

Considering the averages of the binary classification results of the frequency features, the performances obtained vary between 70.85% at the lowest and 100% at the highest. Accordingly, the highest performance was obtained with a 100% accuracy value for the 7.5–10 Hz frequency pair.

When the results are evaluated in terms of classifiers, it is clearly seen in **Figure 6** that the classifier with the highest accuracy rate is the Ensemble Learning classifier. The runner-up is the SVM classifier. The classifiers following Ensemble Learning and SVM were the KNN, Logistic Regression, and Naive Bayes classifiers, in that order. No successful results were obtained with the LDA and Decision Tree classifiers.

#### **3.3 Wavelet transform features results**

In this section, three crucial features frequently used in DWT studies (energy, variance, and entropy) were extracted from the bands (delta, theta, alpha, beta, and gamma) of the EEG signal. These features were generated for six different mother wavelets (Haar, db4, sym4, coif1, bior3.5, rbio2.8) commonly used in the literature. The results were evaluated in detail for the multiple, binary, and three-selected-frequency classifications.

#### *3.3.1 Multiple classification results*

On the basis of mother wavelet selection, the results in **Table 4** reveal that the Bior3.5 and Coif1 mother wavelets were relatively successful, although there is no dominant wavelet type. Experimenting with a larger sample size (number of subjects) could help obtain more precise, generalizable results.

In contrast to the mother wavelet selection, when the classifiers are evaluated, the success of Ensemble learning and LDA classifiers is clearly seen.

#### *3.3.2 Classification results for three selected frequencies*

In this analysis, as in the classification of frequency-domain features (Section 3.2.2), multiple classification was performed using the three selected frequencies (6 Hz, 8.2 Hz, 10 Hz) whose pairwise differences were largest among the seven frequencies. However, unlike the frequency-domain analysis, the selected features were classified and evaluated both together, that is, with energy, variance, and entropy combined into a single feature vector (all features together), and as separate features. Thus, detailed information about the power, irregularity, and spread of the signal was obtained, and it was learned how these three essential signal properties can be used more effectively. The contribution of these frequently used features in this combined form is thereby demonstrated.

In **Figure 7**, the ACC values obtained by classification of the energy, entropy, and variance features extracted using each wavelet family are presented, along with the mean, minimum, and maximum values of the classification results. According to these results, the Haar wavelet function, which yielded more successful results than the other wavelet functions, gave 75.85%, 73.08%, and 73.75% for the energy, entropy, and variance feature groups, respectively. There were no major differences between the mean values of the features extracted with the Haar wavelet. However, the entropy feature group reached a 100% success rate, unlike the others.

In **Figure 8**, the extracted wavelet-based features were used as one feature set, and the performances of the wavelet families were compared on this basis. The most successful wavelet family was again the Haar wavelet function, and the ranking of the other wavelet families did not change. The accuracy values were as follows: 75.85% with the Haar mother wavelet, 67.53% with bior3.5, 60.85% with db4, 56.25% with coif1, 52.35% with rbio2.8, and 44.73% with sym4. Some mother wavelet performances increased compared with the ACC values in **Figure 7**, where the features were handled separately: the mean values of the coif1, db4, and sym4 mother wavelet functions increased.

As a result of the classification processes performed separately for each subject, when the performances of both feature groups were examined, the most successful wavelet function was found to be the Haar wavelet. When the average accuracy values of the feature groups are examined, using the three features as a single feature vector gave higher results for all wavelet functions than using them separately. Although no single feature dominated among energy, entropy, and variance, the highest result was seen for the entropy feature in Subject 3, with 100%.

The pie chart in **Figure 9** shows how often each classifier achieved the best result. Based on these counts, it is clear that the most successful, and most frequently best, classifier was the Ensemble classifier.

#### *3.3.3 Binary classification results*

In this analysis, feature vectors are treated as a single feature vector and individual (separate) feature vectors, similar to those in Section 3.2.3. The resulting

feature vectors were then evaluated by binary classification in order to analyze the frequencies in detail. Following the experimental design, classification performances were obtained for:


Each feature (energy, entropy, variance, and all features together) extracted using each wavelet family. All classification results are presented in **Figures 10**–**15**, one figure per mother wavelet.

According to these results, features obtained from the Haar wavelet function yielded higher accuracies than those obtained from the other wavelet functions. The maximum accuracy performances were obtained for the frequency pairs "6–10", "6.5–8.2", and "6.5–10" Hz with the Haar wavelet (**Table 5**). When the features are evaluated, the "all features together" feature set generally gives better results for all mother wavelet functions.

The results for another hypothesis investigated are presented in **Figure 16** for each mother wavelet. The purpose here is to show how the accuracy changes as the difference between the two stimulus frequencies increases.

Finally, the classification results are presented in **Figure 17**. Since the ranking of the classifiers is similar for all wavelet functions, only the "all features together" result for the Haar wavelet function is shown. According to these results, the most successful classifier was the Ensemble classifier.
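The 5-fold cross-validation protocol behind all of these accuracy figures can be sketched as follows. Only the evaluation loop is the point here: a nearest-centroid rule stands in for the chapter's actual seven classifiers, and the two synthetic clusters stand in for the feature vectors of a frequency pair.

```python
import numpy as np

def kfold_accuracy(X, y, k=5, seed=0):
    """Mean accuracy of a nearest-centroid classifier under k-fold CV.

    The data is shuffled once, split into k folds, and each fold serves
    as the test set exactly once; the reported score is the mean accuracy.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        classes = np.unique(y[train])
        centroids = np.stack(
            [X[train][y[train] == c].mean(axis=0) for c in classes])
        # distance of every test point to every class centroid
        d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# Two well-separated synthetic classes (e.g., "6 Hz" vs "10 Hz" features)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
print(kfold_accuracy(X, y, k=5))
```

Any of the seven classifiers mentioned in the text could be dropped into the inner loop in place of the centroid rule without changing the protocol.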

### **4. Conclusions**

This chapter aimed at a systematic optimization of the cortical visual responses, signal processing methods, and ML algorithms that determine the accuracy and reliability of a multi-command SSVEP-based BCI system. New approaches were explored, using existing methods, to develop an accurate, reliable, and comfortable SSVEP-based BCI that can offer people with severe motor neuron diseases a communication alternative based on attention modulation, without requiring neuromuscular activity or eye movements.

As a result, the following research objectives were achieved in this study:


### **Conflict of interest**

The authors declare no conflict of interest.

### **Acknowledgments**

We would like to thank Adnan Vilic for his support in providing SSVEP records.

*Evaluating Steady-State Visually Evoked Potentials-Based Brain-Computer Interface System… DOI: http://dx.doi.org/10.5772/intechopen.98335*

#### **Author details**

Ebru Sayilgan<sup>1</sup>\*, Yilmaz Kemal Yuce<sup>2</sup> and Yalcin Isler<sup>3</sup>

1 Izmir University of Economics, Izmir, Turkey

2 Alanya Alaaddin Keykubat University, Antalya, Turkey

3 Izmir Katip Celebi University, Izmir, Turkey

\*Address all correspondence to: ebru\_drms@hotmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Wolpaw JR, Boulay CB. Brain signals for brain–computer interfaces. In: Graimann B., Pfurtscheller G., Allison B, editors. Brain-Computer Interfaces. The Frontiers Collection. Springer: Heidelberg; 2009. p. 29-46. DOI: 10.1007/978-3-642-02091-9\_2

[2] Graimann B, Allison B, Pfurtscheller G. Brain–computer interfaces: A gentle introduction. In: Graimann B., Pfurtscheller G., Allison B, editors. Brain-Computer Interfaces. The Frontiers Collection. Springer: Heidelberg; 2010. p. 1-27. DOI: 10.1007/978-3-642-02091-9\_1

[3] Mason SG, Birch GE. A general framework for brain-computer interface design. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(1):70-85. DOI: 10.1109/TNSRE.2003.810426

[4] Ramadan RA, Vasilakos AV. Brain computer interface: Control signals review. Neurocomputing. 2017;223:26-44. DOI: 10.1016/j.neucom.2016.10.024

[5] Abiri R, Borhani S, Sellers E, Jiang Y, Zhao X. A comprehensive review of EEG-based brain-computer interface paradigms. Journal of Neural Engineering. 2019;16:011001. DOI: 10.1088/1741-2552/aaf12e

[6] Basar E. EEG-brain dynamics: relation between EEG and brain evoked potentials. 1st ed. Brain Lang Elsevier; 1980. 411 p.

[7] Wang Y, Gao X, Hong B, Jia C, Gao S. Brain-computer interfaces based on visual evoked potentials. IEEE Engineering in Medicine and Biology Magazine. 2008;27(5): 64-71. DOI: 10.1109/MEMB.2008.923958

[8] Regan D. An effect of stimulus colour on average steady-state potentials evoked in man. Nature. 1966;210:1056-1057.

[9] Gao S, Wang Y, Gao X, Hong B. Visual and auditory brain-computer interfaces. IEEE Transactions on Biomedical Engineering. 2014;61(5):1436-1447. DOI: 10.1109/TBME.2014.2300164

[10] Zhang Y, Xie SO, Wang H, Zhang Z. Data analytics in steady-state visual evoked potential-based brain–computer interface: A review. IEEE Sensors Journal. 2021;21(2):1124-1138. DOI: 10.1109/JSEN.2020.3017491

[11] Huang X, Xu J, Wang Z. A novel instantaneous phase detection approach and its application in SSVEP-based brain-computer interfaces. Sensors. 2018;18(12):4334. DOI: 10.3390/s18124334

[12] Sayilgan E, Yuce YK, Isler Y. Evaluation of wavelet features selected via statistical evidence from steady-state visually-evoked potentials to predict the stimulating frequency. Journal of the Faculty of Engineering and Architecture of Gazi University. 2021;36(2):593-605. DOI:10.17341/gazimmfd.664583

[13] Sayilgan E, Yuce YK, Isler Y. Evaluation of mother wavelets on steady-state visually-evoked potentials for triple-command brain-computer interfaces. Turkish Journal of Electrical Engineering & Computer Sciences. 2021;29(3). DOI:10.3906/elk-2010-26

[14] Sayilgan E, Yuce YK, Isler Y. Investigating the effect of flickering frequency in steady-state visually-evoked potentials on dichotomic brain-computer interfaces. Innovation and Research in BioMedical Engineering. 2021;Under Review.

[15] Zhang Z, Li X, Deng Z. A CWT-based SSVEP classification method for brain-computer interface system. In: 2010 International Conference on Intelligent Control and Information Processing; 13-15 Aug. 2010; Dalian, China. 2010. pp. 43-48. DOI: 10.1109/ICICIP.2010.5564336

[16] Bian Y, Li H, Zhao L, Yang G, Geng L. Research on steady state visual evoked potentials based on wavelet packet technology for brain-computer interface. Procedia Engineering. 2011;15:2629-2633. DOI: 10.1016/j.proeng.2011.08.494

[17] Vilic A. AVI steady-state visual evoked potential (SSVEP) signals dataset 2013 [Internet]. Available from: https://www.setzner.com/avi-ssvep-dataset/. [Accessed 15th August 2018].

[18] Sutter EE. The brain response interface-communication through visually induced electrical brain responses. Journal of Microcomputer Applications. 1992;15(1):31-45.

[19] Bisht A, Srivastava S, Purushothaman G. A new 360° rotating type stimuli for improved SSVEP based brain computer interface. Biomedical Signal Processing and Control. 2020;57: 101778. DOI:10.1016/j.bspc.2019.101778

[20] Sayilgan E, Yuce YK, Isler Y. Prediction of evoking frequency from steady-state visual evoked frequency. Natural and Engineering Sciences. 2019; 4(3): 91-99.

[21] Sayilgan E, Yuce YK, Isler Y. Estimation of three distinct frequencies using fourier transform of steady-state visual-evoked potentials. Duzce University Journal of Science and Technology. 2020;8(4):2337-2343. DOI: 10.29130/dubited.716386

[22] Liu W, Zhang L, Li C. A method for recognizing high-frequency steady-state visual evoked potential based on empirical modal decomposition and canonical correlation analysis. In: 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC); 15-17 March 2019; Chengdu, China. 2019. p. 774-778. DOI: 10.1109/ITNEC.2019.8729005

[23] Chen YF, Atal K, Xie SQ, Liu Q. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface. Journal of Neural Engineering. 2017;14(4):046028. DOI: 10.1088/1741-2552/aa6a23

[24] Gandhi T, Panigrahi KB, Anand S. A comparative study of wavelet families for EEG signal classification. Neurocomputing. 2011;74(17):3051-3057. DOI: 10.1016/j.neucom.2011.04.029

[25] Cao Z, et al. Extraction of SSVEPs-based inherent fuzzy entropy using a wearable headband EEG in migraine patients. IEEE Transactions on Fuzzy Systems. 2020;28(1):14-27. DOI: 10.1109/TFUZZ.2019.2905823

[26] Alpaydin E. Introduction to Machine Learning: MIT Press; 2004. 712 p.

[27] Duda RO, Hart PE, Stork DG. Pattern Classification: John Wiley & Sons;2001.

[28] Lotte F, Bougrain L, Cichocki A, Clerc M, Congedo M, Rakotomamonjy A, Yger F. A review of classification algorithms for EEG-based brain-computer interfaces: A 10-year update. Journal of Neural Engineering. 2018;15(3):1-28. DOI: 10.1088/1741-2552/aab2f2

[29] Sayilgan E, Yuce YK, Isler Y. Determining gaze information from steady-state visually-evoked potentials. Karaelmas Science and Engineering Journal. 2020;10(2):151-157. DOI: 10.7212/zkufbd.v10i2.1588

[30] Narin A, Isler Y, Ozer M. Comparison of the effects of cross-validation methods on determining performances of classifiers used in diagnosing congestive heart failure. DEÜ Mühendislik Fakültesi Mühendislik Bilimleri Dergisi. 2014;16(48):1-8.

[31] Jung Y, Hu J. A k-fold averaging cross-validation procedure. Journal of Nonparametric Statistics. 2015;27:1-13. DOI: 10.1080/10485252.2015.1010532

[32] Jiao Y, Du P. Performance measures in evaluating machine learning based bioinformatics predictors for classifications. Quantitative Biology. 2016;4(4):320-330. DOI: 10.1007/s40484-016-0081-2

[33] Cetin V, Ozekes S, Varol HS. Harmonic analysis of steady-state visual evoked potentials in brain computer interfaces. Biomedical Signal Processing and Control. 2020;60(2020):101999. DOI: 10.1016/j.bspc.2020.101999.

#### **Chapter 9**

## Brain Computer Interface Drone

*Manupati Hari Hara Nithin Reddy*

#### **Abstract**

Brain-computer interfaces have emerged from the dazzling experiments of cognitive scientists and researchers who dig deep into the workings of the human brain, blending neuroscience, signal processing, machine learning, and the physical sciences. The resulting neuroprostheses, neuro spellers, bionic eyes, prosthetic arms, and prosthetic legs have enabled the disabled to walk, the mute to express themselves and talk, the blind to see the beautiful world, and the deaf to hear. My main aim is to analyze the frequency-domain representation of the brain signals of five subjects in their respective mental states using an EEG, to show how to control a DJI Tello drone using the Insight EEG, and then to present and interpret the band power graph, FFT graph, and time-domain signal graph of the mental commands during live control of the drone.

**Keywords:** brain-computer interface, fast Fourier transform, Emotiv Insight, DJI Tello drone, band power, EEG, neuroscience, machine learning, signal processing

#### **1. Introduction**

Brain-computer interface (BCI) technology makes it possible to manipulate embedded systems using signals generated by brainwaves. A BCI system can capture the brain signals generated by neural activity, recognize differently firing neural activity patterns, and transform these signals into useful commands [1]. These commands can be used to control machines or devices. BCIs are most commonly applied in prosthetic limbs for paralyzed patients, exoskeletons, robotics, autonomous vehicles, virtual keyboards, and computer games [2]. BCI systems are classified as invasive or non-invasive, based on the location of the EEG biosensors. Non-invasive BCIs rely on electroencephalography (EEG): a series of biosensors placed on the scalp measures the potentials generated by the electrical activity of thousands to billions of cortical neurons inside the brain [3]. Our study is focused on non-invasive BCI using an electroencephalogram. The neocortex is a convoluted surface at the top of the brain, about one eighth of an inch thick. It contains about 30 billion neurons arranged in six layers. Each neuron makes around 10,000 synapses with other neurons, which results in around 300 trillion connections in total [4]. The most common type of neuron in the cortex is the pyramidal neuron, populations of which are arranged in columns oriented perpendicular to the cortical surface. The surface of the cortex is convoluted, with fissures (sulci) and ridges (gyri). The neocortex exhibits functional specialization: each area of the cortex is specialized for a particular function. The occipital areas near the back of the head specialize in basic visual processing [5]. The parietal areas towards the top of the head specialize in spatial reasoning and motion processing [6].
Visual and auditory recognition occur in the temporal areas (towards the sides of the head), while the frontal areas are involved in planning and higher cognitive functions. Inputs to a cortical area mainly arrive at the middle layers, while outputs leave from the upper and lower layers [7]. Based on these input-output patterns, the cortex roughly acts as an organized network of sensory and motor areas. The EEG is a device that extracts, organizes, and filters the electrical signals produced by the neural firings (action potentials) of the brain. It is used for various diagnostic purposes and is a popular non-invasive technique for recording neuronal firing using electrodes placed on the scalp. Currents originating deep in the brain are not detected by EEG, because the voltage field falls off with the square of the distance from the source [8]. The time-domain display of the signals from the different electrodes is known as the electroencephalogram. EEG reflects the summation of the postsynaptic potentials of thousands of neurons oriented radially to the scalp, but not of tangentially oriented ones. The spatial resolution of EEG is poor, in the square-centimeter range, because of the impedance of the skull, scalp, cerebrospinal fluid, and meninges [9]. These layers act as volume conductors and low-pass filters that smear the original signals, whereas the temporal resolution is good, in the range of milliseconds [9]. The time-domain signal from the EEG is then converted into a frequency-domain signal using transforms from signal processing, such as the discrete Fourier transform and the fast Fourier transform.
The amplified frequencies extracted from the brain by the electroencephalogram fall (according to the fast Fourier transform) into four ranges: theta (θ), from 4 Hz to 8 Hz; alpha (α), from 8 Hz to 12 Hz; beta (β), from 12 Hz to 25 Hz; and gamma (γ), from 25 Hz up to a maximum of 45 to 75 Hz [10]. After performing many experiments on many patients, specifically to observe when these waves and amplified frequencies occur and in what state of the patient, researchers have presented a generalized relation between the frequency ranges and normal human functions. When a person is ready or about to perform a task, or is in an alert state, a higher percentage of α waves is generally observed. If a person is task-oriented, busy, anxiously thinking, or actively concentrating, high percentages of β waves are generally observed. If a person is performing demanding motor functions, or is switching activities while multitasking, a high percentage of γ waves is observed, mostly in the frontal lobe. After performing several tests, I observed that in my meditation state a high percentage of θ waves appeared; likewise in the sleeping state, where the mind is in a relaxed condition, there are high percentages of θ waves. The Emotiv Insight is a wearable EEG device composed of five sensors designed to acquire and measure the key activity of all functional areas of the cortex. The device can provide raw EEG signals, mental commands (conscious thoughts), facial expressions (facial mimicry), and measurements of brain performance. The principal characteristic of this design is dynamic brain-computer interaction with more degrees of freedom for controlling physical and virtual objects.
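The band boundaries above amount to a simple lookup from a peak frequency to a band name; a minimal sketch, using exactly the ranges quoted in the text (the function name is illustrative):

```python
def eeg_band(freq_hz):
    """Map a peak frequency (Hz) to the EEG band names used above."""
    if 4 <= freq_hz < 8:
        return "theta"
    if 8 <= freq_hz < 12:
        return "alpha"
    if 12 <= freq_hz < 25:
        return "beta"
    if 25 <= freq_hz <= 75:
        return "gamma"
    return "outside the ranges considered here"

print(eeg_band(6.3))   # theta: relaxed / meditative state
print(eeg_band(35.9))  # gamma: demanding motor task
```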
#### *Brain Computer Interface Drone DOI: http://dx.doi.org/10.5772/intechopen.97558*

The device identifies mental states and emotions such as engagement, focus, excitement, meditation, relaxation, and stress [11]. It is possible to build brain-activity models in real time and obtain a deeper perspective on the specific patterns of an individual's brain activity. A very important problem in EEG processing is the low signal-to-noise ratio, since there are many layers between the neural cortex and the scalp, and artifacts can appear with great amplitude. The solution is to apply filtering and noise-reduction techniques to remove the noise from the raw EEG data and extract the brain-activity signals [12]. Since the EEG signal is non-stationary, classifiers trained on one user's data (which is limited, another major problem) generalize poorly even to data recorded from the same individual at different times, and results differ between individuals because of physiological differences; this limits the use of EEG applications. Accuracy may increase with the number of training sessions, but to generalize across subjects, i.e., to handle inter-subject variability, processing pipelines with domain-specific approaches are used to clean the data, extract relevant features, and classify them (Riemannian-geometry-based classifiers, adaptive classifiers). Deep learning, a subset of machine learning, is used mainly to extract features; recently, CNNs (convolutional neural networks) have been used to learn the features and the classifier simultaneously, achieving end-to-end supervised feature learning. Such systems use CNNs and recurrent neural networks of 3 to 10 layers in total [13].

#### **2. Common EEG patterns**

#### **2.1 EEG**

Most neurosurgery hospitals, and hospitals where diagnosis of the human brain takes place, use EEG devices with very high temporal and spatial resolution, which also occupy large amounts of space. In brain-computer interface work, however, we need to develop applications to control the external environment in such a way that the patient, subject, or user does not face significant adaptability issues [14]. The EEG devices used in hospitals generally require a minimum of 2 to 3 hours just to fit the device and place the electrodes in the proper locations on the scalp: contact-optimizing fluid must be applied all over the head and the electrodes positioned one by one, which is a tedious and complex process. Recently, however, the company Emotiv has produced a very sophisticated and easily adaptable device named Insight. The Insight has efficient specifications: it carries a 3-axis gyroscope and a 3-axis magnetometer, which are very helpful for removing the artifacts caused by head movements, a capability absent in hospital EEGs (where the patient is therefore instructed not to move the head and to avoid motor movements). The device has five electrodes made of a semi-dry polymer: two in the frontal region, two in the temporal region, and one in the central parietal region. The electrodes are named according to the international 10–20 system.
The frontal left electrode is named AF3, the frontal right AF4, the temporal left T7, the temporal right T8, and the central parietal electrode Pz, plus a DRL reference mastoid electrode on the left. The channels have a built-in digital 5th-order sinc filter, a bandwidth of 0.5-43 Hz with digital notch filters at 50 Hz and 60 Hz, and 2.4 GHz wireless connectivity. The dynamic range is 8400 μV (input-referred), sampling is sequential at 128 samples per second, motion resolution is 14 bits, and the EEG resolution is 14 bits with 1 LSB = 0.51 μV (16-bit ADC, with the 2 bits of instrumental noise floor discarded). The main principle of EEG devices is the differential amplifier, which has two inputs: the signals fed to the two inputs are subtracted, and the output is their relative difference. Mounting the device on the subject's scalp is a crucial procedure: a saline-glycerol solution is applied to the semi-dry electrodes to maintain optimal contact quality with the scalp. The electrodes were placed at the active locations according to the international 10–20 system protocol, and optimal 100% contact quality was ensured. The subject is instructed not to move, to avoid motion artifacts, and to stay focused on the mental commands. The connection between the Insight and the laptop follows the EmotivApp protocol and can be established either with the Insight dongle or over Bluetooth. Authentic interpretation of EEG requires a great deal of training and experience in analyzing and predicting the graphical data. The most important rules to follow while analyzing EEG data concern the type of montage used and comparing the subject's state in the time domain with the EEG data at that time, in order to check that there are no external noises or movements made by the subject. There are different types of montages, such as the longitudinal bipolar, transverse bipolar, circumferential bipolar, temporal bipolar, Cz referential, and ipsilateral ear referential montages, which analyze the patient's cognitive state in different ways to reach the correct interpretation. To record raw EEG data for the experiments in the EmotivPro app on a MacBook Air laptop, I used an ipsilateral ear referential montage, where one input of the differential amplifier is the DRL reference mastoid electrode and the other input is one of the active electrodes. I set the channel spacing to 400 μV, the minimum amplitude to −100 μV, and the maximum amplitude to +100 μV, with a high-pass filter.

#### **2.2 EEG pattern in eye blinks**

The cornea is positively charged and the retina negatively charged. When the eyes close, they roll upwards due to Bell's phenomenon (**Figure 1**), which produces characteristic deflections at the frontal electrodes [15]. A single blink produces an upward peak followed by a downward peak. I blinked continuously, so crests and troughs occurred repeatedly, and only at the AF3 and AF4 frontal electrodes (**Figure 2**). Because of Bell's phenomenon, when the eyes close the cornea moves up, changing the potential observed near the frontal electrodes: the initial upward peak on eye closure indicates a relative positive potential of AF3/AF4 with respect to the reference electrode, and when the eyelids open the downward peak indicates a relative negative potential of AF3/AF4 with respect to the reference electrode.

**Figure 1.** *Bell's phenomenon during eye blink.*

**Figure 2.** *Crests and troughs observed at AF3,AF4 electrodes in common reference montage EEG data.*

#### **2.3 Meditation**

To cross-check a theoretical aspect of the brain frequencies, and to validate that the device is authentic, we performed meditation mental tasks with diverse subjects and analyzed the frequencies of their respective time-domain signals using the fast Fourier transform (FFT) in MATLAB.

#### *2.3.1 Test dataset acquisition and observation*

The test was conducted in an anechoic chamber, as can be seen in the background of the subjects, in order to minimize external noise and isolate the experiment. The subjects' voluntary consent was obtained prior to this test. The subjects were strictly advised to be in a relaxed mental condition for 10 minutes before the test, and 30 seconds before the test their eye movements (eyes-closed and eyes-open states) were analyzed with the EmotivPro application in order to remove eye-blink artifacts. All subjects performed the test successfully as instructed.

As each subject's experiment begins, their mental state is captured on camera and noted in the mental-state column of **Table 1**. After ensuring optimal contact quality, the EEG data, using the common reference montage, is analyzed live; with an EmotivPro license, it is possible to record the current EEG data of the respective electrodes.

After the tests, the raw EEG data recorded during experimentation is stored in the EmotivPro application cloud. This recorded data is exported to the client system in .csv format (**Figure 3**); these files can be accessed through the links provided in **Table 1** for the respective subject test.

This .csv file contains the recorded potentials of all five channels (AF3, AF4, T7, T8, Pz) at their respective timestamps. The data is copied and imported into the MATLAB workspace. The Database Toolbox is used to read, write, import, and export the .csv files (**Figure 4**), and the Digital Signal Processing Toolbox is used to convert the time-domain signal into a frequency-domain signal.

The fast Fourier transform converts a time-domain signal into a frequency-domain signal. As the raw data is fed into the FFT in MATLAB, it analyzes the frequencies of the time-domain signals: plotting the frequency axis against the absolute value of the FFT gives the frequency-domain signal. In this case, a large amount of noise was observed in the frequency-domain graph for the test dataset. Hence, in order to remove the noise, and the high peak near zero, smoothing filters were applied independently, and a band-pass filter between 4 and 45 Hz was also applied, since the result of the experiment lies within that frequency range.

#### **Table 1.**

*The table displays each subject with their respective mental task.*
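The FFT-plus-band-mask step can be sketched as follows. NumPy stands in for the MATLAB toolboxes used in the chapter, the channel is synthetic (a 6.5 Hz "theta" component plus a 50 Hz mains artifact and noise), and the band mask is a crude stand-in for the smoothing and band-pass filters:

```python
import numpy as np

fs = 128  # Insight sampling rate, samples per second
t = np.arange(10 * fs) / fs
# Synthetic stand-in for one recorded EEG channel
x = (np.sin(2 * np.pi * 6.5 * t)
     + 0.5 * np.sin(2 * np.pi * 50 * t)          # mains artifact
     + 0.3 * np.random.default_rng(0).standard_normal(len(t)))

spec = np.abs(np.fft.rfft(x))                    # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)        # bin frequencies in Hz

# Keep only the 4-45 Hz bins before peak-picking; this also discards
# the DC/low-frequency peak near zero mentioned above
band = (freqs >= 4) & (freqs <= 45)
peak_hz = freqs[band][np.argmax(spec[band])]
print(peak_hz)  # ~6.5, the dominant rhythm inside the band of interest
```

The same peak-picking on real exported .csv channels would locate the maxima reported in Table 3.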



#### **Figure 3.**

*The imported data in the workplace.*


#### **Figure 4.**

*The subject's EEG data in the work space.*



#### **Table 2.**

*This table represents the experimental analysis graphically.*

#### *2.3.2 Test results*

The combination of the smoothing filter, the band-pass filter, and the FFT gave the best output for the frequency-domain analysis. The results were observed, noted, and represented graphically in **Table 2**, with the time-domain graph at the top and the frequency-domain graph at the bottom of the experimental-analysis column.

As most of the subjects performed the meditative test, they should experience a very high proportion of theta waves; these ranged between 5.1 and 7.6 Hz, i.e., there is a maximum peak within the 5.1-7.6 Hz range in the frequency-domain signal. When a subject performed a demanding motor task, he experienced a high proportion of gamma waves, with a maximum peak at 35.9 Hz (**Table 3**).

Hence, the test results show that the subject's mental state falls within a specific frequency range when a specific mental task is performed. This concludes the validation of the Emotiv Insight EEG device.


#### **Table 3.**

*This table represents the frequency analysis of subjects numerically.*

#### **3. Controlling the drone with mental commands**

After connecting the device with the EmotivApp, the EmotivBCI application is used to train the mental commands. After training, live mode is switched on to control a cube with imagined thoughts. Next, the live mental commands are extracted from the application, following the Cortex API documentation, to integrate them with the drone; for convenience, I wrote the code in Python. The commands are then passed to the DJI Tello drone following the DJI Tello API documentation. In this way a link between the drone and the Insight headset is achieved.

#### **3.1 Mental command training**

Giving a mental command to the EmotivBCI application results in movement of the object in the desired direction. Substantial training is needed initially to obtain a good output. For example, during an object-movement test, when the subject thinks of moving the cube in a chosen direction, we can observe in **Figures 5** and **6** that the cube moves in that direction. Each of the neutral, move-left, lift, and drop mental commands was trained 10 times.

#### **3.2 Extraction of mental commands and assigning them to the drone**

The Python code was built in the Atom editor on a Dell Inspiron laptop. The json, websocket, ssl, time, win32, requests, pyautogui, socket, keyboard, and threading libraries are used in the code. Control of the drone from the computer is achieved using the DJI Tello drone protocol (**Figure 7**): the drone creates a local Wi-Fi network, and the laptop connects to it using a TP-Link TL-WN725N N150 USB wireless adapter. The UDP protocol is used to interface the computer with the DJI Tello drone; the socket must be bound to a local port on the computer to which the Tello can send messages, and the functions that listen for messages from the Tello print them to the screen.

**Figure 5.** *The object movement test of the subject, live mode in BCI App when the person is neutral.*

#### **Figure 6.**

*The object movement test of the subject, live mode in BCI App when the person thinks of moving the object left from neutral, the cube has moved left.*

#### **Figure 7.** *Procedure of interfacing EEG with a drone.*

The connection between the Insight and the laptop follows the EmotivApp protocol and can be established either with the Insight dongle or over Bluetooth. On the Insight side, an InsightHandler class is used and a web connection is acquired. To get approval from the Emotiv app, we generate a client ID and secret and then grant approval in the app. After approval, authorization generates a token; this token creates the session and loads the profile from the Cortex app. We then start streaming the mental-command data in the terminal. These commands are mapped to keyboard presses, which in turn control the drone: as the user thinks of a mental command, it drives the computer keyboard, which in turn controls the drone.
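The UDP side of the interface can be sketched as follows. The Tello SDK text commands ("command", "takeoff", "land", "up 50", "down 50") and the 192.168.10.1:8889 address come from DJI's published SDK; the mental-command labels and the mapping between them and drone commands are hypothetical, standing in for whatever was trained in EmotivBCI, and this is not the chapter's actual code.

```python
import socket

# Official Tello SDK endpoint: UTF-8 text commands over UDP to port 8889
TELLO_ADDR = ("192.168.10.1", 8889)

# Hypothetical mapping from trained mental commands to Tello commands
MENTAL_TO_TELLO = {
    "neutral": None,     # no action
    "lift": "up 50",     # climb 50 cm
    "drop": "down 50",   # descend 50 cm
}

def tello_command(mental_command):
    """Translate a streamed mental command into a Tello SDK string."""
    return MENTAL_TO_TELLO.get(mental_command)

def fly(commands):
    """Send a sequence of mental commands to the drone. Requires being
    on the Tello's Wi-Fi network, so it will not run at a desk."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9000))                     # local port for Tello replies
    sock.sendto(b"command", TELLO_ADDR)       # enter SDK mode
    sock.sendto(b"takeoff", TELLO_ADDR)
    for mc in commands:
        cmd = tello_command(mc)
        if cmd:
            sock.sendto(cmd.encode(), TELLO_ADDR)
    sock.sendto(b"land", TELLO_ADDR)
    sock.close()

print(tello_command("lift"))  # up 50
```

In the chapter's setup the translation actually goes through simulated keyboard presses (pyautogui/keyboard) rather than a direct function call, but the command strings reaching the drone are the same.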

#### **4. Result**

After several attempts, the drone control was modified so that whenever the power meter of a mental command reaches or crosses 50%, the corresponding keyboard key is pressed. For example, if the subject is in a neutral state and thinks of the lift command, and the power meter of that command reaches 50%, the command is executed: the keyboard key assigned to lift is pressed, the laptop in turn transmits the thrust command, and the drone receives it and lifts off the floor. The terminal window and the live-mode mental-command window controlling a cube are shown in **Figures 8**–**10**. The cube is in the neutral position, so "neutral" appears in the terminal window in **Figure 8**. When the person thinks of lifting the cube, the cube moves upwards from its neutral position, and the terminal window changes from neutral to lift (**Figure 9**). The cube then returns to the neutral position, so "neutral" appears in the terminal window again (**Figure 10**).

**Figure 10.** *Cube moving downwards.*

**Figure 11.** *Drone in the neutral state.*

**Figure 12.** *The drone takes off.*

The drone is in the resting state when the subject's mental command is neutral (**Figure 11**). The drone takes off when the subject's mental command changes from neutral to lift (**Figure 12**).

#### **5. Conclusions**

EEG patterns such as motor movements, eye movements, meditation, and sleep tests were recorded and analyzed. A mathematical model was developed in MATLAB using signal-processing concepts to analyze the theoretical data of the brain frequencies at different mental states. The device was validated by analyzing the raw signal with signal-processing methods such as the fast Fourier transform, while simultaneously applying filters to extract the signal of interest by removing noise. These EEG patterns were analyzed on five different subjects, and the data were cross-validated against the theoretical brain-frequency data. After these experiments, the interface between the DJI Tello drone and the Emotiv Insight BCI headset was achieved, and the drone was controlled with two mental commands, moving up and moving down from the neutral state.
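The kind of FFT-based validation described above can be sketched as follows. The 128 Hz sampling rate and the band edges are common textbook values rather than figures taken from the chapter, and the input here is a synthetic alpha-band signal, not recorded EEG.

```python
import numpy as np

FS = 128  # Hz; an assumed sampling rate (the Emotiv Insight samples at 128 Hz)

def band_power(signal, fs, low, high):
    """Mean FFT power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return power[mask].mean()

# Synthetic 10 Hz "alpha" oscillation plus a small amount of noise.
t = np.arange(0, 4, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))

# Classic EEG bands (textbook edges, in Hz).
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(eeg, FS, lo, hi) for name, (lo, hi) in bands.items()}
# For this synthetic signal, the alpha band dominates, matching the
# theoretical brain-frequency expectation for a relaxed state.
```

Comparing the measured band powers against the expected dominant band is one simple way to cross-validate recordings against theoretical brain-frequency data.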

#### **Acknowledgements**

Inspiration, motivation, and presentation have always played a key role in the success of any venture. I express my sincere thanks to Prof. S. Murugesan, Associate Professor, Department of Pharmacy, Birla Institute of Technology and Science, Pilani, for being my supervisor, encouraging me to the highest peak, and providing me the opportunity to prepare for this research. I also acknowledge my indebtedness and deep sense of gratitude to Dr. Anantha Krishna Chintanpalli, Assistant Professor, Department of Electrical and Electronic Engineering at BITS Pilani, for being my co-supervisor; his valuable assistance during this work has shaped it into its present form. I am immensely obliged to my friends for their elevating inspiration, encouraging guidance, and kind supervision during the completion of my project. Last, but not least, my parents have also been an important inspiration for me, and with due regards I express my gratitude to them.


### **Author details**

Manupati Hari Hara Nithin Reddy
Department of Pharmacy, Birla Institute of Technology and Science, Pilani, Rajasthan, India

\*Address all correspondence to: f20171067@pilani.bits-pilani.ac.in

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


