Introductory Chapter: Human-Robot Interaction – Advances and Applications

*Helen Meyerson, Parthan Olikkal, Dingyi Pei and Ramana Vinjamuri*

## **1. Introduction**

Recent advances in robotic technology are producing robots better suited to tasks in which they interact directly with people in everyday environments, both at home and in the workplace. Human-robot interaction (HRI) is beneficial because robots can evoke emotional responses in humans, and humans find robots engaging. Additionally, robots can integrate into everyday settings with little difficulty and can be perceived by humans as active social agents, completing their programmed tasks with control, independence, and intentionality. In HRI, a user's experience of an interaction varies from person to person and is influenced by many factors, such as the physical context of the environment, cultural context, thoughts and feelings toward the robot, and the social nature of the exchange [1].

HRI is also an important development because it allows robots to be directed by humans to complete challenging and hazardous tasks, notably in industrial settings. With modern computational algorithms programmed into the environment, HRI can increase productivity and reduce downtime and task interruptions [2]. Additionally, HRI is a beneficial way to compensate for a lack of human labor in a given setting, whether due to extreme conditions or low pay. A lack of human labor hurts the local or large-scale economy because it means lower production supply, and this issue can potentially be resolved by incorporating robots into the scene. However, fully replacing humans with robots would mean a larger initial investment and would eliminate jobs. Instead, robots could be incorporated alongside human workers to improve human comfort and optimize productivity. HRI is a significant modern approach to improving the functioning of everyday settings and has countless advantages and applications.

## **2. Collaborative and humanoid robots**

Collaborative robotics is the field of study that involves using human demonstration to teach robots different skills. The robot can learn to recognize goal-oriented actions and to understand human actions and verbal and nonverbal communication. While robots can learn from imitation, in a complex environment where different situations arise, imitation alone is not enough for the robot to function by itself without human involvement. Collaborative robots can work alongside humans on tasks and can provide assistance by responding to user requests for help or by automatically detecting when to assist. Thus, in a collaborative environment, both parties must be able to refer to objects in the shared space. Humans can use a combination of techniques, including sensorimotor signals, verbal cues, pointing gestures, and gaze, to direct the robot to handle a certain object [3].

Humanoid robots are designed to resemble humans in appearance. They have taken on ever-greater roles in everyday human environments as coworkers, companions, trainers, and assistants. Humanoid robots are created to be similar to humans both in outward design and in language and gesture behaviors. In designing robots to play roles alongside humans, it is important to investigate how humans interpret and emotionally respond to the robot, to allow for smooth incorporation into our everyday lives. Humans have been demonstrated to engage with and respond especially well to humanoid robots: humanoids were seen as having more moral responsibility and as observing social norms, and they elicited more formal expressions from the human counterparts communicating with them [4].

Telerobots perform routine tasks under the supervisory control of humans. The human supervisors monitor and reprogram the robots at irregular intervals to execute different pieces of an overarching task. Telerobots are designed to simplify communication with humans and improve the ease of human control. It is important that the telerobot is directed to complete the task as efficiently as possible while the human operator remains comfortable controlling it, even during chaotic situations. Moreover, telerobots can be instructed to carry out tasks in environments that are hazardous or inaccessible to humans. Additionally, telerobots offer greater precision than human hands, which can prove useful in many settings, such as surgery [5].

**Figure 1.** *Overview of significant applications of human-robot interaction.*

Human-robot interaction is an area of research that involves developing and improving optimal robots that cooperate with humans. An overview of current and potential applications of HRI is illustrated in **Figure 1**. In the subsequent sections, we discuss each of these applications and its challenges in detail.

## **3. Space exploration**

Emerging technologies in HRI look promising to efficiently combine the capabilities of astronauts, remote operators, and robotic assets into human-machine teams that can effectively communicate for the purpose of space exploration. These technologies have been carefully planned to meet sustainability requirements and minimize the use of resources. The use of HRI can be especially beneficial to complete space exploration tasks such as collecting environment and mapping information, providing situational awareness of the scene and surroundings, developing and maintaining infrastructure, and providing mobility support to the astronauts. The future of successful space exploration will be heavily influenced by the ability of the human and robot to demonstrate strong communication through both gestures and dialogue and to collaborate with one another for problem-solving [6].

One such HRI technology is explainable AI (xAI), which can provide a virtual deep-space environment simulation showing how a space rover will behave in a given scenario, so that the human controller can prepare strategies and make informed decisions to apply during the actual deployment. An additional technology is virtual, augmented, and mixed reality (VAMR), which provides visual displays, situational awareness, and additional functionality and communication. The navigation cues and recognition technology that VAMR provides can guide the rover in effectively investigating unfamiliar terrain. Another emerging technology is adaptive and adaptable automation: adaptive control is where the robot automatically adjusts control parameters as a system response, while adaptable control is where the human controller makes manual system changes. This design balances self-adjusting robots against the significance of human monitoring, aiding efficiency and safety during space exploration [7]. The use of HRI along with these emerging technologies in space exploration expands the possibilities for new discoveries.

## **4. Military**

The future of military robots puts soldiers and robots together as teammates, where the soldier and robot can share the task load and accomplish the goal together. In this environment, the robot is an important entity that acts autonomously and intelligently and can simulate team behaviors such as communication and coordination [8]. Robots are able to complete operations in environments that are harmful to soldiers, keeping soldiers and civilians safe. These operations include clearing buildings, search and rescue in disaster areas, detecting explosives, and surveillance activities. Additionally, military robots can support soldiers by gathering data to improve situational awareness, transporting equipment, distributing supplies efficiently, facilitating commanders' decision-making, and protecting soldiers from hostile attacks.

To make HRI integration possible in this setting, a multitude of factors must be considered, including operating environments, task difficulty, the soldier's comfort level with the robot, and communication and decision-making for both the soldier and the robot. Emerging technologies, both modeling and simulation systems, have been developed to identify and resolve potential integration issues. One such modeling system is the Improved Performance Research Integration Tool (IMPRINT). IMPRINT analysis demonstrated that integrating HRI into a mission with soldiers mounted in carrier vehicles or on horses would cause overload issues, and, therefore, gunners were a better-suited group for HRI integration. Modeling technologies such as IMPRINT can be used to set guidelines that can be validated through simulations, and the models can then be revised and improved. Simulations are particularly helpful for determining the effects of adding complexity to tasks, considering potential strategies to reduce overload, and investigating ways to improve performance while carrying out the military task [9].

## **5. Healthcare**

Robots taking on roles as healthcare workers offer incredible benefits for the population. These include accuracy in treatment performance, high working speed, reduced workload for human healthcare workers, organization of daily routines, optimized healthcare resources, and resolution of simple problems so that the patient does not have to visit the doctor [10]. The elderly population is increasing in size, and the available supply of healthcare workers concerningly cannot support this increase. This demographic especially stands to improve its well-being through interaction with healthcare robots. Robots as healthcare workers allow elderly adults to remain at home later in life instead of in an elder care facility, which reduces financial and emotional stress for the patient and the family, lowers costs, and helps elderly adults retain independence and be happier and healthier.

Healthcare robots can serve in rehabilitation or social roles. Rehabilitation robots perform tasks or make tasks easier for the user, while social robots give elderly adults someone to interact with and have as a companion. It is important to consider the concerns and needs of elderly adults during the robotic design process so that the user will accept the robot. Some elderly adults have been shown to be skeptical of accepting a robot because it represents a rapid jump in technology and may present privacy issues, but they are more likely to accept a robot that can perform tasks they find useful [11].

Conditions faced by the elderly population for which HRI provides technological improvements include physical and functional decline and cognitive decline. Healthcare robots can assist elderly adults with tasks that become more difficult due to these conditions. Emerging technologies can help with tasks impacted by physical and functional decline, such as cleaning, heating food, and sorting laundry. Robotic developments in mobility assistance and in other activities such as bathing are also in the works. Healthcare robots make these activities safer and more comfortable for the patient. Other technologies help patients monitor their health conditions and provide appointment reminders. Robotic technologies address cognitive decline by providing cognitive training exercises that keep patients engaged and stimulated.

The COVID-19 pandemic has only furthered the growing shortage of healthcare workers. Throughout the pandemic, healthcare robots were used for a wide variety of purposes, including health screenings, transportation of medical goods, and even direct patient care. The robots provided a multitude of benefits, including minimizing human contact, thereby reducing transmission rates, and decreasing the workload on healthcare workers. The technologies used during the pandemic were adapted from preexisting technologies, as this approach was more efficient than developing new technologies during a crisis. For example, the Guangzhou Gosuncn Robot Company had developed robots originally intended for policing, but these robots were modified and equipped with powerful cameras to screen the body temperature of up to 10 people at once and to detect whether an individual is wearing a facemask. The COVID-19 pandemic demonstrated how important it is to formulate reliable protocols for adapting preexisting technologies to healthcare purposes if and when a future pandemic or crisis occurs [12]. This will allow for the most organized treatment possible and the most efficient patient path to recovery.

## **6. Manufacturing**

HRI provides benefits in manufacturing in terms of productivity, safety, and working conditions. HRI is an approach that combines the complementary strengths of humans and robots in manufacturing. This approach would make manufacturing a more sustainable long-term career for individuals, as the incorporation of robots allows workers to avoid hard physical work; this also means a reduction in illness rates. Additionally, productivity increases with this approach because robot workers do not need downtime or on-the-job training. HRI can reduce running costs, increase assembly speed, and improve reliability and precision. Multiple factors need to be considered when implementing HRI in a setting to optimize performance, including the movement speed of the robot, the distances between humans and robots, robot noises, the robot's trajectory, and the robot's physical appearance [13].

Robots can be especially helpful in assisting humans by delivering tools and parts and by holding manufacturing equipment or objects in the process of being assembled. Robots contribute accuracy, speed, and consistency to the setting, while humans contribute organization, management, and other cognitive assets. Sharing the workspace promotes situational awareness, danger perception, and richer communication. Modern robot designs are often programmed with advanced sensing, joint compliance, and artificial intelligence. Robots can play impactful roles in individual parts of the manufacturing setting, or they can contribute in a broader sense. For example, in a narrower role, a robot can control manufacturing tools or feeder equipment such as conveyors and loaders. In a broader sense, a robot's state-of-the-art design and advanced technology give it the ability to contribute to process flow control and the maintenance of workplace safety.

Manufacturing settings vary in many ways, including plant size, wealth, and typical batch size. HRI is beneficial for industrial settings of all sizes, so it is important to find ways to make HRI more accessible for small and mid-size enterprises (SMEs), which have fewer resources to begin with and are less likely to take risks with their manufacturing model [14]. SMEs play a critical role in the economy: in the UK, 99% of the 5.6 million businesses are SMEs. This emphasizes the need for SMEs to adopt modern technologies so that they can meet consumer demand.

One approach to increasing HRI adoption in SME settings is to identify individual motivation by creating a model that supports an overall goal through predefined subgoals. For example, faster and more efficient destocking of assembled parts contributes to greater productivity; the predefined subgoal, in this case, is the faster destocking of assembled parts, an area where HRI can provide strong support. This approach allows the SME to identify the most suitable technologies for its assembly setting without using up resources to trial technologies that may or may not be optimal. HRI has incredible benefits for manufacturing, and it is important to determine the most efficient way to incorporate it into the setting.

## **7. Education**

HRI has shown promise in providing learning companions for children in classrooms and at home, and tutors that help students better understand content. HRI has been demonstrated to be beneficial for students of all ages, including preschool, elementary school, and post-secondary education. HRI can help teach a broad range of disciplines, including STEM, languages, and handwriting. Aside from adaptability to a wide range of disciplines, robots provide additional benefits in education settings, such as engagement, motivation, improving the learner's self-esteem, and providing empathetic feedback. When designing robots for this purpose, it is important for developers to consider the social conscience of the robot and its ability to collaborate with educators.

At the preschool level, emerging technologies are often geared toward improving social integration and engaging the children in constructive learning, meaning the learner is actively involved in knowledge construction. The technologies are incorporated into storytelling in the classroom, as storytelling is essential for children's language and creative development. In this setting, the robot would act as a storyteller to the children. Adopting HRI into storytelling has demonstrated a positive impact on the children's enjoyment and engagement. It has also shown positive results in rehabilitation, learning English, and creativity enhancement [15]. HRI is additionally adaptable to different educational environments, such as a playground or schoolyard, which gives the children further room to learn and grow.

At the elementary school level, robots have taken on the role of tutors in language learning. In one study, 10- to 11-year-olds were given the task of learning an artificial language. The robots taught the children a 30-minute introductory lesson, with the aim that the students would be able to form simple sentences afterward. The sociability of the robot was demonstrated to be a crucial factor for both engagement and performance. The students showed stronger engagement and performance when the robot exhibited role-model behavior, personal feedback, empathy, and communicativeness [16]. These findings further support the importance of considering sociability when designing a robot for tutoring purposes.

Personalization has been a recent subject of interest when designing robots for educational settings. The extent to which tailoring to an individual's strengths and weaknesses benefits that individual's learning is not fully understood. This question was investigated in a study in which undergraduate and graduate students were tutored by a robot in solving grid-based logic puzzles. Participants received lessons from both personalized and non-personalized robots. The findings showed that even relatively simple personalization yields significant learning benefits, as personalization led to stronger performance and faster puzzle-solving [17]. This demonstrates that personalization and adaptability are other important qualities to keep in mind when designing an optimal robot for learning. Additionally, it makes clear that HRI is a beneficial approach for post-secondary students and not just for younger students.

## **8. Personal and societal applications**

HRI has emerged in society, with robots working with people in airports, shopping malls, and care centers. As robots enter public spaces more often, they carry the responsibility of maintaining a positive image and appearance, as well as behavior that reflects well on society. It is critical that robots for these settings are designed to be accepting of all people and not to promote gender stereotypes or ageist views. Robots in public settings have the capacity not only to be respectful to those they help and to avoid social biases, but also to serve as examples and advocates for positive social change. They can bring about a positive impact on a wide range of societal issues such as homelessness, poverty, and refugee crises. To develop robots that represent social empowerment, it is important to consider how robots are shaped as part of society's socio-political dynamics.

Airports are one area where the incorporation of HRI can be particularly helpful and improve passenger experience in the setting. Airports are often overwhelming for passengers due to the large crowds, frequent announcements, and confusing screens and signs. The atmosphere of the airport setting should be considered when designing a robot to fulfill the needs of the passengers. A large robot that can communicate using nonverbal gestures is favorable because airports are crowded and noisy and it is important for the robot to be easily accessible and understood despite the surrounding noise. It is also important for the robot to be able to accommodate the hearing impaired, which could be done by having a display space showing text and images. Additional factors to look out for include affordability, range of dynamic motion, and suitability for the particular environment [18].

Retail is a separate area where HRI can benefit customers. HRI can improve service quality by helping customers navigate a store to find products and information, receive personalized guidance on products, order online for delivery or pickup, and complete purchase transactions. As HRI makes it easier for customers to shop, this in turn increases sales, reduces labor costs, and provides an engaging retail experience. Robots in retail additionally have advantages over human staff, as this approach minimizes human error and allows for more rapid service processes. Human staff often experience physical fatigue and mental strain when performing service tasks, and their work experience includes training time and downtime, which takes away from the opportunity for productive sales. Humanoid robots, notably, can mimic human communication and social interactions, and this makes them strong candidates for integration into retail settings. When designing a robot for this setting, it is critical to consider the robot's emotional aspect for an optimal customer experience. The use of HRI in public environments is promising as a means to improve personal experience and have a positive socio-political impact on society as a whole.

## **9. Challenges in HRI**

It is difficult to design a robot capable of accurate interaction and communication. Work remains to find creative ways to improve robots' capacity to understand human actions and respond appropriately. Even with the ability to recognize human hand gestures, there is still room for error due to the complexity and high degrees of freedom of human hands. More effective robots should combine multi-modal features and be able to recognize posture, facial expressions, and voice intensity. This requires developing more complicated and powerful sensors, which further complicates equipping the robot. Additionally, for optimal interaction, robots need a mechanism to foresee and predict upcoming actions; the complexity and inconsistency of human actions make designing this mechanism a challenge. In HRI, robots also need to be sensitive to their surroundings, including clutter, lighting changes, and depth perception. It is important, and at the same time difficult, to consider all of these factors together.

HRI not only has design challenges but also ethical parameters. It is important to keep in mind both helpful and harmful behavior with regard to robots and robotic assistance. The use of robots for lethal activities in warfare, for sexual pleasure, or to care for emotionally unstable target groups is a particularly sensitive subject. Robots also have the potential to make humans less motivated to work, or unwilling or unable to fulfill certain tasks, even simple ones. There are multiple perspectives on robot rights, on treating robots respectfully, and on whether ethics apply to the robot itself at all. An additional ethical issue is who regulates robot use and who is held responsible if a robot damages a human or property. This also raises the questions of who is responsible for robot malfunctions and of the proper way to dispose of robots. HRI also has privacy issues, as the process of consenting to give personal information to the robot is not concrete. Another potential issue is the robot's physical appearance if it is inadvertently built to match the biases of its designer or to embody discrimination through Euro-centric or overly feminized features. HRI raises many ethical issues that are important to keep in perspective so that possible harm to robots or users can be avoided.

## **10. Conclusion**

HRI is evidently a promising modern approach with great benefits in both home and work settings. Some of these benefits include engagement, accuracy, productivity, and adaptability. Collaborative robots, humanoid robots, and telerobots all have endless possibilities, and there is still room for improvement in exploring the promising potential these technologies offer. When designing a robot for optimal performance, there are many important factors to keep in mind, including physical appearance, behavioral traits, and suitability for a particular setting. Emerging technologies, including simulation systems and virtual displays, are helpful for testing and improving a robot's capabilities and preparing for integration. In the subsequent chapters, this book will discuss modern HRI applications in multiple aspects and will touch upon different perspectives and experimental methodologies for developing HRI environments. Emerging technological advancements in HRI and the strong evidence of its benefits make HRI an excellent approach in everyday settings, with even more exciting growth to come.


## **Author details**

Helen Meyerson, Parthan Olikkal, Dingyi Pei and Ramana Vinjamuri\* Vinjamuri Lab, University of Maryland Baltimore County, Baltimore, MD, USA

\*Address all correspondence to: ramana.vinamuri@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Young JE, Sung J, Voida A, Sharlin E, Igarashi T, Christensen HI, et al. Evaluating human-robot interaction. International Journal of Social Robotics. 2011;**3**(1):53-67

[2] Landi CT, Ferraguti F, Costi S, Bonfè M, Secchi C. Safety barrier functions for human-robot interaction with industrial manipulators. In: 2019 18th European Control Conference (ECC); 2019. pp. 2565-2570. DOI: 10.23919/ECC.2019.8796235

[3] Kragic D, Gustafson J, Karaoguz H, Jensfelt P, Krug R. Interactive, collaborative robots: Challenges and opportunities. International Joint Conference on Artificial Intelligence IJCAI-18. 2018. pp. 18-25. DOI: 10.24963/ijcai.2018/3

[4] Austermann A, Yamada S, Funakoshi K, Nakano M. How do users interact with a pet-robot and a humanoid. In: CHI'10 Extended Abstracts on Human Factors in Computing Systems (CHI EA '10). New York, NY, USA: Association for Computing Machinery; 2010. pp. 3727-3732. DOI: 10.1145/1753846.1754046

[5] Sheridan TB. Human–robot interaction: Status and challenges. Human Factors. 2016;**58**(4):525-532

[6] Arora A, Panda SN, Raheja J, Nagpal D. Development Approaches To Intuitive, SSD & Haptics Integrated HRI & Social HRI systems for Assisting Space Exploration. In: 2021 6th International Conference on Innovative Technology in Intelligent System and Industrial Applications (CITISIA); 2021. pp. 1-7. DOI: 10.1109/CITISIA53721.2021.9719922

[7] Luebbers M, Chang C, Tabrez A, Dixon J, Hayes B. Emerging Autonomy Solutions for Human and Robotic Deep Space Exploration. In: Proceedings of Space CHI: Human-Computer Interaction for Space Exploration (SpaceCHI 2021). Yokohama, Japan. 2021

[8] Demir M, McNeese NJ, Cooke NJ, Ball JT, Myers C, Frieman M. Synthetic teammate communication and coordination with humans. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Los Angeles, CA: SAGE Publications; 2015

[9] Cosenzo KA, Barnes MJ. Human-robot interaction research for current and future military applications: From the laboratory to the field. In: Proc. SPIE 7692, Unmanned Systems Technology XII. Vol. 7692. 7 May 2010. DOI: 10.1117/12.850038

[10] Broadbent E, Kuo IH, Lee YI, Rabindran J, Kerse N, Stafford R, et al. Attitudes and reactions to a healthcare robot. Telemedicine and e-Health. 2010;**16**(5):608-613

[11] Broadbent E, Tamagawa R, Patience A, Knock B, Kerse N, Day K, et al. Attitudes towards health-care robots in a retirement village. Australasian Journal on Ageing. 2012;**31**(2):115-120

[12] Zhao Z, Ma Y, Mushtaq A, Rajper AMA, Shehab M, Heybourne A, et al. Applications of robotics, artificial intelligence, and digital technologies during COVID-19: A review. Disaster Medicine and Public Health Preparedness. 2021;**2021**:1-11

[13] Bortot D, Ding H, Antonopolous A, Bengler K. Human motion behavior while interacting with an industrial robot. Work. 2012;**41**(Supplement 1):1699-1707

[14] Schönfuß B, McFarlane D, Athanassopoulou N, Salter L, Silva LD, Ratchev S. Prioritising low cost digital solutions required by manufacturing SMEs: A shoestring approach. In: International Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing. Cham: Springer; 2019. pp. 290-300

[15] Fridin M. Storytelling by a kindergarten social assistive robot: A tool for constructive learning in preschool education. Computers & Education. 2014;**70**:53-64

[16] Saerbeck M, Schut T, Bartneck C, Janse MD. Expressive robots in education: Varying the degree of social supportive behavior of a robotic tutor. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). New York, USA: Association for Computing Machinery; 2010. pp. 1613-1622. DOI: 10.1145/1753326.1753567

[17] Leyzberg D, Spaulding S, Scassellati B. Personalizing robot tutors to individuals' learning differences. In: 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI). Bielefeld, Germany. 2014. pp. 423-430

[18] Tonkin M, Vitale J, Herse S, Williams MA, Judge W, Wang X. Design methodology for the UX of HRI: A field study of a commercial social robot at an airport. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI '18). New York, USA: Association for Computing Machinery; 2018. pp. 407-415. DOI: 10.1145/3171221.3171270

## **Chapter 2**

## EEG Control of a Robotic Wheelchair

*Ashok Kumar Chaudhary, Vinay Gupta, Kumar Gaurav, Tharun Kumar Reddy and Laxmidhar Behera*

## **Abstract**

Brain-Computer Interface (BCI) technology has been widely used in clinical research; however, its adoption in consumer devices has been hindered by high cost, poor reliability, and limited autonomy. In this study, we introduce a low-cost, open-source-hardware-based, consumer-grade product that brings BCI technologies closer to elderly and motor-impaired individuals. Specifically, we developed an autonomous motorized wheelchair with BCI-based input capabilities. The system employs a ROS navigation-stack backend, which integrates RTAB-Map for mapping, localization, and visual odometry, as well as the A\* global and DWA local path-planning algorithms for seamless indoor autonomous operation. Data acquisition is accomplished using OpenBCI 16-channel EEG sensors, while an Ensemble-Subspace kNN machine learning model is utilized for intent prediction, particularly goal selection. The system offers active obstacle avoidance and mapping in all environments, while a hybrid BCI motor-imagery-based control is implemented in a known mapped environment. This prototype offers remarkable autonomy while ensuring user safety and granting unparalleled independent mobility to the motor-impaired and elderly.

**Keywords:** brain-computer interface, motor-impaired, wheelchair, ensemble-subspace KNN

## **1. Introduction**

The development of Brain-Computer Interface (BCI) technology has led to a wide range of scientific and practical applications since its inception in the 1970s. One of the key areas of focus for BCI technology is wheelchair systems, where ease of use and efficiency for the user are of paramount importance: the system should be designed so that it is simple for the user to operate the wheelchair and achieve their desired objectives. EEG-based BCIs are particularly well suited for this application, as they offer a high degree of convenience and efficiency for the user. However, previous wheelchair systems based on the P300 [1, 2] and on steady-state visual evoked potentials (SSVEP) [3] did not provide the same level of convenience, because they required the user to continuously watch a screen in order for commands to be decoded from the EEG signals. The user's field of vision is restricted and fixed on the BCI feedback, making it difficult for them to handle some situations. Wheelchair systems based on motor imagery have been suggested for this reason [4, 5]. One such system offered fixed-direction steering of the wheelchair [6], but it had several shortcomings: its design was non-autonomous and required constant attention, it lacked a simultaneous localization and mapping (SLAM) feature, and it suffered from a low information transfer rate. To overcome these shortcomings, a LiDAR-based SLAM system was introduced [7], based on steady-state visual evoked potentials (SSVEP); however, it had limited shared autonomy, also suffered from a low information transfer rate, and was not cost-effective. A newer wheelchair system utilizing motor imagery [8] was recently introduced, offering a significant enhancement in functionality compared to previous models. Even more advanced technology was later developed, incorporating both motor imagery and P300 to provide a further enhancement in performance [9]. This system offers the added features of laser-range-finder and encoder-based localization, as well as autonomous capabilities. Despite these advancements, its 2D mapping and localization remain inferior, and it is not cost-effective. Additionally, an SSVEP-based direction and angle control system for wheelchair design [10] has been proposed, utilizing visual landmarks for feedback. However, this system is not fully autonomous, relies on inferior mapping and localization techniques, and is cost-ineffective due to its low information transfer rate. Overall, while there have been significant advancements in wheelchair technology, there is still room for improvement in terms of autonomy and cost-effectiveness. A wheelchair based on eye-blink steering control has also been proposed [11]; it is an innovative approach but does not solve the problem of natural eye-blink signals. Since it is non-autonomous, high-amplitude natural eye-blink signals lead to undesirable motor control of the wheelchair. To overcome this, the eye-blinking method has been integrated with electroencephalography (EEG) to control wheelchair movement [12], but continuously blinking can become very difficult for a person. As the technology progressed, a wheelchair was designed to drive in four directions [13], based mainly on time-frequency-domain analysis of EEG signals using a neural network. It is only a proposed prototype, not a full-size wheelchair, and is not suitable for actual real-time implementation. A more recent wheelchair is based on computer-vision navigation [14]; it uses tags to localize itself, but suffers from low maneuverability and is unable to handle dynamic obstacles accurately in real time. Finally, an omni-directional wheelchair design based on the Mecanum wheel has been proposed [15], using SSVEP and alpha-wave-based asynchronous control, but its major disadvantage is that it is non-autonomous and requires constant attention for the control mechanism.

The literature reviewed above presents a comprehensive comparison of the various approaches that have been proposed for brain-controlled robotic wheelchairs. It is evident that most of the earlier methods relied heavily on motor imagery or P300 for classifying navigation commands. A significant number of researchers employed SSVEP to select from a fixed set of predefined commands, and some studies captured EEG signals corresponding to eye blinks for prediction purposes. As for the chair's autonomy, most early researchers limited themselves to a fixed set of controls, while others attempted fully autonomous navigation; however, the techniques they used were not as advanced as those available today, making them unsuitable for current use. The proposed method, on the other hand, successfully addresses the challenges of signal acquisition, goal prediction, mapping, localization, and autonomous navigation. In conclusion, the proposed method is far superior to earlier approaches in its ability to effectively control a brain-controlled robotic wheelchair.

The proposed method for the design and development of a bio-signal enabled robotic wheelchair for motor-disabled and elderly care includes the following novelties over other existing methods.


## **2. Proposed methodology for bio-signal enabled control system**

To achieve our objective, we have devised a two-part solution. The first part involves acquiring the goal state through a GUI-based display module (**Figure 1**).

**Figure 1.**

*Graphical user interface (GUI) of wheelchair display.*

This module displays a map obtained from the working memory and provides the user with various goal-point options. For instance, in a hospital scenario, these options may include patient rooms, gardens, toilets, cafeterias, etc. The user selects a goal point, which triggers a neural depolarization pattern within the cortex. We use a 16-channel OpenBCI cap to record this pattern. By analyzing the time-series data generated by this pattern, we use motor imagery (MI)-based prediction and machine learning to predict the selected goal.

The navigation module is responsible for devising a trajectory toward the target state while ensuring the safety of the user and avoiding any potential static or dynamic obstacles. Initially, the module focuses on obtaining drift-free odometry by employing a multi-sensor system (LiDAR, depth camera, IMU, and wheel odometry) whose data pass through an Extended Kalman Filter (EKF) to ensure accurate odometry. Subsequently, RTAB-Map maintains a live map within the odom frame of the robot to enable obstacle avoidance via the local planner. The global planner formulates an optimal path from the start state to the goal state based on the static map. The local planner then leverages the live map and the navigation waypoints to maneuver in the immediate vicinity while considering the kinematic constraints of the wheelchair, preserving a pre-set safety buffer, and accounting for the motion of moving obstacles. The local planner constantly updates the path to the goal, adhering to the imposed environmental and kinematic constraints to ensure complete safety for the occupant until they reach their destination. **Figure 2** shows all the components of the bio-signal enabled control system.
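The chapter does not detail the EKF internals, so the following is a minimal illustrative sketch, not the authors' implementation: a one-state filter that propagates yaw with the IMU gyro rate and corrects it with wheel-odometry yaw, the same predict/update pattern that a full pose filter (such as the ROS robot_localization EKF mentioned in Section 4.1) applies across all state variables. The noise parameters and sample rates are assumptions.

```python
import numpy as np

class YawEKF:
    """One-state EKF: fuse IMU yaw rate (predict) with odometry yaw (update)."""

    def __init__(self, q=1e-4, r=1e-2):
        self.x = 0.0   # fused yaw estimate (rad)
        self.P = 1.0   # estimate variance
        self.q = q     # process noise: gyro integration drift (assumed)
        self.r = r     # measurement noise: wheel slip in odometry (assumed)

    def predict(self, gyro_rate, dt):
        # Propagate the state with the IMU yaw rate.
        self.x += gyro_rate * dt
        self.P += self.q

    def update(self, odom_yaw):
        # Correct with the wheel-odometry yaw measurement.
        k = self.P / (self.P + self.r)                # Kalman gain
        err = np.arctan2(np.sin(odom_yaw - self.x),
                         np.cos(odom_yaw - self.x))   # wrap error to [-pi, pi]
        self.x += k * err
        self.P *= 1.0 - k

ekf = YawEKF()
ekf.predict(gyro_rate=0.05, dt=0.02)   # 50 Hz IMU sample (assumed rate)
ekf.update(odom_yaw=0.001)             # 10 Hz odometry sample (assumed rate)
```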

### **2.1 Hardware control module (HCM)**

The hardware infrastructure of a brain-computer interface (BCI) wheelchair is a critical determinant of its operational and practical efficiency. It encompasses the sensors and electrodes that capture brain signals, the control mechanism that deciphers these signals and translates them into motion instructions, and the actuators that execute the movement of the wheelchair. Without a properly functioning hardware infrastructure, the BCI wheelchair could fail to respond to user input, and its mobility capabilities would be hindered. Furthermore, the hardware infrastructure must be robust and dependable to guarantee user safety and well-being. Thus, it is essential to utilize high-quality components and to regularly service and upgrade the hardware infrastructure to maintain its ongoing functionality.

**Figure 2.** *Block diagram for the bio-signal enabled (BSE) control system.*

## *2.1.1 Frame dimensions and sensors*

The wheelchair has a frame made of iron, with dimensions of 114 x 64 x 93.5 cm. It is equipped with two motors in a differential drive configuration, with drive wheels 13 inches in diameter. It can carry a maximum weight of 100 kg and weighs 45 kg without any sensors. The wheelchair runs on a 24 V battery and can travel up to 20 km on a full charge. Overall, these hardware specifications make the wheelchair sturdy, reliable, and suitable for assisting individuals with mobility impairments. The wheelchair has been modified to house the sensors shown in **Figure 3**.

1. **LiDAR Sensor:** A 2D Light Detection and Ranging (LiDAR) sensor, the **Slamtec RPLIDAR A1**, is used to obtain the approximate locations of static and dynamic obstacles with respect to the wheelchair. It has a range of 12 m with a depth resolution of 0.1 mm. The sampling frequency is 4000 Hz in normal mode and 8000 Hz in boost mode. The horizontal field of view is 360°.

**Figure 3.**

*Various electronic sensors: (a) LiDAR, (b) RGB-D camera, (c) IMU (d) encoder and (e) EEG headset.*

2. **Motors:** We use two Robodo MY1016ZL 250 W PMDC motors as the primary drive motors for the wheelchair. The operating voltage is 24 V, the rated torque is 12.7 Nm, and the rated speed is 120 RPM.

3. **Microcontroller:** An Arduino Mega is used as the microcontroller board to control the motors and read encoder data. It uses a powerful and power-efficient 8-bit chip, the ATmega2560, capable of running instructions at 16 MHz. It has 100 GPIO pins and 4 UART, 5 SPI, and I2C interfaces for connecting various sensors and actuators, and it connects to the network through USB-serial.

4. **Motor Driver:** To control the speed and direction of the brushed DC motors, a motor driver of the appropriate specification is needed. We use Cytron's MDDS30 Smart Drive, which accepts PWM control signals from the microcontroller. It is rated for motors with a rated current of 30 A and a peak current of 80 A, with an operating voltage of 7-35 VDC. It also features regenerative braking technology that charges the battery when the brake is applied.

5. **Power Distribution Module:** Different sensors and peripherals require different voltages and have additional current requirements. The Jetson Nano works on 5 V with a peak current of 4 A, while the Intel NUC requires 19.5 V. We developed a power distribution module that provides stable power at the required levels for all components.

6. **Display:** To show the different destination goals and feedback, we incorporated a 7-inch HDMI touch display connected to the NUC.

**Figure 4.** *Various electronics component placement.*

The electronic components depicted in **Figure 4** include state-of-the-art sensors and controllers designed to respond to the unique needs of the elderly and disabled population. These devices work together seamlessly to provide the support and assistance required for a range of mobility tasks.

### **2.2 Communication control module (CCM)**

The system has been designed with a modular approach that actively distributes workloads among sensors, actuators, display devices, and compute modules. This design makes the system easy to troubleshoot and repair, allows efficient communication among the components, and enables effective load balancing, making the system more reliable and efficient. **Figure 5** illustrates the overall sensor interface and network communication infrastructure and shows how the different components communicate with one another. The display flashes probable destinations to which the wheelchair can navigate. The destination goal is captured from the brain by the BCI headset, which transfers this information wirelessly over Bluetooth to the Intel NUC, where the path-planning algorithms use it to plan the path. Feedback from sensor-data fusion, combining the camera attached to the NUC, the LiDAR and IMU hooked to Jetson Nano 2, and the encoders attached to Jetson Nano 1, is used by the system to generate the forward and angular velocities for the wheelchair. These velocities are sent to the microcontroller connected to Jetson Nano 1 via a serial interface, where they are converted into the individual wheel velocities set by the motor driver. All three computing devices are connected through a router via Ethernet.

**Figure 5.**

*Sensor interface and network communication.*

## **3. Movement control system**

The motor control design for a brain-controlled wheelchair uses signals from the brain to control the movement of the wheelchair. It includes safety features such as obstacle detection and emergency-stop mechanisms. The control algorithms used to interpret the brain signals and drive the motors are crucial for smooth and responsive movement. The design also focuses on making the wheelchair easy to operate and user-friendly.

#### **3.1 Wheelchair movement analysis**

The robot has two motor-controlled wheels at the back and two castors at the front. The two motor-controlled wheels govern the kinematics of the chair (**Figure 6**), which is implemented using differential drive kinematics. We define the following variables: $ICC$: instantaneous center of curvature; $R$: radius of curvature; $v_l$: linear velocity of the left wheel; $v_r$: linear velocity of the right wheel; $v_f$: linear velocity of the robot; $w_l$: angular velocity of the left wheel; $w_r$: angular velocity of the right wheel; $w$: angular velocity of the robot; $D_w$: distance between the left and right wheels; $d_w$: radius of the wheels.

$$v_f = \frac{v_l + v_r}{2} \tag{1}$$

$$w = \frac{v_l - v_r}{D_w} \tag{2}$$

$$R = \frac{D_w \, (v_l + v_r)}{2 \, (v_l - v_r)} \tag{3}$$

$$v_l = d_w \, w_l \tag{4}$$

$$v_r = d_w \, w_r \tag{5}$$
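As a concrete reading of Eqs. (1)-(5), the sketch below implements the differential-drive relations in Python; the wheel separation and wheel radius are illustrative assumptions, not the wheelchair's measured dimensions.

```python
import math

D_W = 0.56    # distance between left and right wheels, m (assumed)
d_w = 0.165   # wheel radius, m (roughly half of a 13-inch diameter)

def forward_kinematics(v_l, v_r):
    """Body velocities from wheel linear velocities, Eqs. (1)-(3)."""
    v_f = (v_l + v_r) / 2.0                      # Eq. (1)
    w = (v_l - v_r) / D_W                        # Eq. (2)
    if v_l == v_r:
        R = math.inf                             # straight-line motion
    else:
        R = D_W * (v_l + v_r) / (2.0 * (v_l - v_r))  # Eq. (3)
    return v_f, w, R

def inverse_kinematics(v_f, w):
    """Wheel angular velocities from commanded (v_f, w), inverting
    Eqs. (1)-(2) and using v = d_w * w per Eqs. (4)-(5)."""
    v_l = v_f + w * D_W / 2.0
    v_r = v_f - w * D_W / 2.0
    return v_l / d_w, v_r / d_w

w_l, w_r = inverse_kinematics(v_f=0.5, w=0.2)    # e.g. a gentle forward arc
```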

#### **3.2 PWM based control mechanism**

Once the wheel angular velocities have been determined, motor control is achieved using our control system, which consists of a microcontroller connected to one of the Jetson Nano processors via USB. The microcontroller communicates with the motor drivers using Pulse Width Modulation (PWM) signals. Each driver channel has a PWM pin and an enable pin: the first determines motor speed, while the second determines motor direction. Based on the inputs on these two pins, delivered via the microcontroller, the motor driver regulates the current delivered to the motor from the main battery. The frequency and width of the PWM pulses determine the rotation speed of the motors. Here, $v_{max}$ and $v_{min}$ have been defined for safety as the maximum and minimum velocities for normal operation.

$$w_l = \frac{2\pi P_l}{P_n} \tag{6}$$

where $P_l$ is the number of encoder pulses from the left wheel in 1 s, and $P_n$ is the number of pulses corresponding to one complete revolution of the wheel.

#### **3.3 Velocity communication**

The computer publishes the reference speed commands as a Twist message on the */cmd_vel* topic. The microcontroller then reads the commanded velocities from the USB serial link and generates the PWM signals for motor control. The required linear velocity $v_f$ is in *msg.linear.x* and the required angular velocity $w$ is in *msg.angular.z*. Solving Eqs. (1) and (2) gives the required individual wheel velocities, and from Eq. (6) the required PWM values are calculated.
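A hedged sketch of this path as a ROS node follows: it subscribes to the topic, solves Eqs. (1) and (2) for the wheel velocities, and forwards PWM values to the microcontroller over USB serial. The serial port, baud rate, wire protocol, and scaling constants are assumptions; the chapter does not specify the firmware interface.

```python
#!/usr/bin/env python
import rospy
import serial
from geometry_msgs.msg import Twist

D_W = 0.56       # wheel separation, m (assumed)
V_MAX = 1.0      # wheel speed mapped to full PWM duty, m/s (assumed)

port = serial.Serial('/dev/ttyACM0', 115200, timeout=0.1)  # assumed port

def to_pwm(v):
    """Clamp a wheel velocity to [-V_MAX, V_MAX] and scale to 8-bit PWM."""
    v = max(-V_MAX, min(V_MAX, v))
    return int(v / V_MAX * 255)

def cmd_vel_cb(msg):
    v_f, w = msg.linear.x, msg.angular.z
    v_l = v_f + w * D_W / 2.0        # inverting Eqs. (1)-(2)
    v_r = v_f - w * D_W / 2.0
    # Hypothetical ASCII protocol "L<pwm> R<pwm>"; sign encodes direction.
    port.write(('L%d R%d\n' % (to_pwm(v_l), to_pwm(v_r))).encode())

rospy.init_node('wheel_driver')
rospy.Subscriber('/cmd_vel', Twist, cmd_vel_cb)
rospy.spin()
```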

## **4. Autonomous navigation**

Simultaneous localization and map building (SLAM) [16] and path planning are at the core of any autonomous or assisted system. To provide reliable locomotion in a dynamic environment up to a determined goal, we have designed a robust navigation stack. The primary step in navigation is determining the environment. This is followed by pinpointing the ego position within the realized environment and updating it as the robot relocates within that environment. Any dynamic obstacles must be detected in this process and added to the point cloud. Finally, using the determined map, the procured goal, and a constantly updated laser scan, we can navigate as required.

## **4.1 Various SLAM approaches**

We can also objectively evaluate the functionality of common SLAM algorithms on the basis of their input/output capabilities. **Figure 7** shows a comparison of various SLAM algorithms.



**Figure 7.**

*Comparison of various SLAM algorithms.*

• **RGBD-SLAMv2** [21]: It is another visual SLAM implementation that can handle full occlusion and white noise added to the visual data stream. It compensates for the loss of data in such situations by multi-sensor fusion with IMU data, implemented using the robot_localization package in ROS.

## **5. Integration of BCI with wheelchair**

The goal is selected from the display GUI (**Figure 1**) using the BCI module. Once the BCI module gives the final prediction of the goal, the autonomous path-planning modules kick in. The global planner, A\*, generates an efficient path to the goal, and the DWA local planner handles any dynamic change in the environment and performs obstacle avoidance. The motion control system sets the wheel velocities, while odometry from sensor fusion and localization gives feedback on pose and velocities. This loop continues until the wheelchair reaches its destination. If any module fails, recovery behaviors take control for the user's safety. After reaching the destination goal, the system waits for the next goal. **Figure 8** shows the BCI integration with the wheelchair.
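Because the stack uses the standard ROS navigation interfaces, the hand-off from the BCI prediction to this planning loop can be sketched as a move_base action client; the node name and goal coordinates below are placeholder assumptions.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('bci_goal_sender')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 4.2     # hypothetical "cafeteria" key point
goal.target_pose.pose.position.y = -1.3
goal.target_pose.pose.orientation.w = 1.0  # unit quaternion, facing +x

# A* plans globally, DWA tracks locally; this call blocks until arrival
# or until the recovery behaviors give up.
client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Navigation finished with state %d', client.get_state())
```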

## **5.1 Brain-computer interface (BCI)**

A Brain-Computer Interface (BCI) creates an interface between the brain and a computer. This is possible because we produce distinct and differentiable signals for every task we perform. We analyze these signals and translate them into commands that are sent to an output device to perform a desired action. In our case, we use these signals to drive a wheelchair.

**Figure 8.** *BCI integration with wheelchair.*

## **5.2 Motor imagery (MI)**

MI is one of the standard techniques in BCI, in which the user is asked to imagine a motor action, such as raising the left hand or the right limb, without actually performing it. This produces a potential change that is captured by an EEG headset. During the imagination process, event-related synchronization and desynchronization occur, which lie in the mu/alpha (8-12 Hz) and beta (16-25 Hz) frequency bands [22].

## **5.3 Collection and initial processing of bio-signal recordings**

We take the signals from a 16-channel OpenBCI Ultracortex Mark IV EEG headset (**Figure 3e**), which acquires signals from the brain and wirelessly sends them to the computing device over Bluetooth. The placement of electrodes is very important for motor imagery applications; the standard 10-20 electrode placement can be seen in **Figure 9**. We exploit C3, C4, CP1, and CP2 because they provide better signals for motor imagery applications.

Once we get the raw data, we filter it to remove noise and artifacts. Artifacts need to be removed before the data are fed to feature extraction; otherwise, they interfere with the signal of interest and decrease the signal-to-noise ratio (SNR). Several classes of such artifacts are described in [23].


**Figure 9.** *EEG electrode placement* [Source: commons.wikimedia.org].


The frequency range required for our operation is 8 to 50 Hz, so we apply a fifth-order Butterworth bandpass filter with a 4-60 Hz passband to remove unwanted frequencies. This bandpass filter also removes some of the low-frequency components that arise from head and body movements.
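As a minimal sketch of this pre-processing stage, assuming the 125 Hz sampling rate of a 16-channel OpenBCI board (the chapter does not state the rate), the filter can be realized with SciPy's second-order-sections form for numerical stability:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 125.0   # sampling rate, Hz (assumed for the 16-channel OpenBCI setup)

# Fifth-order Butterworth bandpass with a 4-60 Hz passband, as above.
sos = butter(5, [4.0, 60.0], btype='bandpass', fs=FS, output='sos')

def preprocess(eeg):
    """Zero-phase filter an (n_channels, n_samples) array of raw EEG."""
    return sosfiltfilt(sos, eeg, axis=-1)

filtered = preprocess(np.random.randn(16, 5 * int(FS)))  # one 5 s recording
```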

### **5.4 Data collection sequence**

For motor imagery-based data collection, we divided the training sequence into four classes: left hand, right hand, left foot, and right foot (**Figure 10**). A blank screen was followed by one of the four sequences flashed on the display. EEG recordings of 5 seconds each were collected, from which we trim about 1 second in total from the start and end and divide the remainder into four parts. Slicing allows us to collect data efficiently.
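A small NumPy sketch of this epoching step follows; the symmetric trim is an assumption consistent with the roughly 1 second removed in total that Section 6 reports, and the 125 Hz sampling rate is again assumed.

```python
import numpy as np

FS = 125   # sampling rate, Hz (assumed)

def slice_recording(rec, n_slices=4, slice_len=FS):
    """Split one (n_channels, ~5*FS) recording into n_slices epochs of
    slice_len samples, trimming the onset/offset edges symmetrically."""
    core = n_slices * slice_len
    start = (rec.shape[1] - core) // 2        # ~0.5 s dropped at each edge
    segment = rec[:, start:start + core]
    return np.split(segment, n_slices, axis=1)

epochs = slice_recording(np.random.randn(16, 5 * FS))
assert len(epochs) == 4 and epochs[0].shape == (16, FS)
```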

## **5.5 Feature extraction**

Once we have the filtered data, we operate on each data point and extract features for our machine learning model to classify it into the different classes. The features we use are as follows:

• Mean,

$$\bar{x} = \frac{\sum_{i=1}^{N} x_i}{N} \tag{7}$$

• Median, $\mathrm{med}(x)$

• Root mean square,

$$\mathrm{RMS}(x) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} x_i^2} \tag{8}$$

• Variance,

$$\mathrm{Var}(x) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1} \tag{9}$$

• Skewness,

$$\mathrm{Skew}(x) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})^3}{(N-1)\,\sigma^3} \tag{10}$$

where $\sigma$ is the standard deviation.

**Figure 10.** *Motor imagery training sequence.*

• Kurtosis,

$$\mathrm{kurt}(x) = E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] \tag{11}$$

• Integral features, such as the area under the curve and the waveform length.

• Slope sign change,

$$\text{Slope Sign Change} = \sum_{n=2}^{N-1} f\left[(x_n - x_{n-1})(x_n - x_{n+1})\right] \tag{12}$$

where

$$f(x) = \begin{cases} 1, & \text{if } x \ge \text{threshold} \\ 0, & \text{otherwise} \end{cases}$$

Apart from these time-domain features, we also use frequency-domain features. To transfer the time-domain EEG signals to the frequency domain, we perform a discrete Fast Fourier Transform (FFT) [24] on the time-domain signal:

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N} \tag{13}$$

• Mean frequency,

$$f_{\text{mean}} = \frac{\sum_{i=0}^{N} I_i \cdot f_i}{\sum_{i=0}^{N} I_i} \tag{14}$$

where $I$ is the spectrogram intensity (in dB).

• Median frequency, $\mathrm{median}(f)$

• Spectral power density [25], estimated with Welch's method:

$$x_i(n) = x(n + iD), \quad n = 0, 1, \ldots, M - 1, \quad i = 0, 1, \ldots, L - 1 \tag{15}$$

$$\tilde{P}_{xx}^{(i)}(f) = \frac{1}{MU} \left| \sum_{n=0}^{M-1} x_i(n)\, w(n)\, e^{-j 2\pi f n} \right|^2 \tag{16}$$

$$P_{xx}^{W}(f) = \frac{1}{L} \sum_{i=0}^{L-1} \tilde{P}_{xx}^{(i)}(f) \tag{17}$$

• Peak frequency, the frequency corresponding to the peak of the spectral power distribution.
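The sketch below gives illustrative NumPy/SciPy implementations of these features for a single-channel epoch; scipy.signal.welch stands in for the averaged periodogram of Eqs. (15)-(17), and the slope-sign-change threshold is an assumed parameter.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import welch

FS = 125.0   # sampling rate, Hz (assumed)

def time_features(x, ssc_threshold=0.0):
    d = np.diff(x)
    return {
        'mean': np.mean(x),                      # Eq. (7)
        'median': np.median(x),
        'rms': np.sqrt(np.mean(x ** 2)),         # Eq. (8)
        'var': np.var(x, ddof=1),                # Eq. (9)
        'skew': skew(x),                         # Eq. (10)
        'kurt': kurtosis(x, fisher=False),       # Eq. (11)
        'auc': np.trapz(np.abs(x)),              # area under the curve
        'wl': np.sum(np.abs(d)),                 # waveform length
        # Eq. (12): slope reversals, (x_n-x_{n-1})(x_n-x_{n+1}) >= threshold
        'ssc': int(np.sum(d[:-1] * -d[1:] >= ssc_threshold)),
    }

def freq_features(x):
    f, p = welch(x, fs=FS, nperseg=min(len(x), 125))   # Eqs. (15)-(17)
    cum = np.cumsum(p)
    return {
        'f_mean': np.sum(f * p) / np.sum(p),           # Eq. (14)
        'f_median': f[min(np.searchsorted(cum, cum[-1] / 2), len(f) - 1)],
        'f_peak': f[np.argmax(p)],                     # peak frequency
    }

epoch = np.random.randn(125)                           # one 1 s epoch
feats = {**time_features(epoch), **freq_features(epoch)}
```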

#### **5.6 Training and classification**

In a training routine, we show the user one of the four MI training-sequence images and ask them to imagine the corresponding action. The EEG headset captures this data and wirelessly sends it for pre-processing and feature extraction. After cleaning, the data are labeled, and the data points are added to the dataset. After collecting all the data from all subjects, we feed this dataset to training. **Figure 11** shows the flow of the training module.

**Figure 11.** *Block diagram for BCI training module.*

**Figure 13.** *Block diagram for BCI target prediction module.*

We tried many classifiers, such as an SVM with a polynomial kernel, but settled on Ensemble Subspace k-NN (k = 3) [26] with an accuracy of 91.4% on 5-fold cross-validation (**Figure 12**). After creating the dataset and training, we run our model for real-time prediction. In the goal-prediction pipeline, we continuously take raw data, store 1 second of data in a buffer, and perform pre-processing and feature extraction. The feature-extracted, unlabeled data are then fed to the same classifier used in the training stage for class (here, goal) prediction. **Figure 13** shows the goal prediction module.
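The chapter does not name the toolchain behind the Ensemble Subspace k-NN; one hedged approximation in scikit-learn is a random-subspace bagging ensemble of k = 3 nearest-neighbour learners, sketched below with the number of learners and subspace fraction as assumptions.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

clf = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=3),   # the k = 3 base learner
    n_estimators=30,       # number of subspace learners (assumed)
    max_features=0.5,      # each learner sees a random half of the features
    bootstrap=False,       # subspace method: subsample features, not rows
    random_state=0,
)

# X: (n_epochs, n_features) feature matrix, y: per-epoch goal labels.
# scores = cross_val_score(clf, X, y, cv=5)   # 5-fold CV as in Table 1
```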

## **6. Results**

We collected MI data from eight healthy subjects in two sessions, as multi-session data collection improves overall real-world performance on new data. For each subject we take 100 raw readings of 5 seconds each in both sessions; after trimming 1 second and slicing the remainder into four 1-second segments, this yields 100 readings × 2 sessions × 4 slices = 800 labeled data points per class. This dataset is then used for training.

The accuracy of different classifiers under 5-fold cross-validation is presented in **Table 1** below.


**Table 1.**

*Performance comparison of different classifier models.*

## **7. Discussion**

The development of an advanced wheelchair is a significant breakthrough in enabling independent mobility for elderly and physically impaired individuals. However, it represents only a modest step towards the creation of an empowering and intelligent assistive technology.

There are certain limitations to our approach, such as the manual addition of key points within a known map by developers or admins, which the user can then use for BCI-based control. In contrast, a newly realized map can accept goal coordinates and orientation only via manual touch entry. The main obstacle to achieving a higher level of freedom through BCI control is the intent prediction models associated with BCI. To overcome this, future research can focus on improving the granularity of motor imagery estimates to enable the selection of any point in all maps, both stored and realized, via BCI-based input.

## **8. Conclusion**

In empirical investigations using Motor Imagery with healthy volunteers, we were able to achieve significant results within the confines of our laboratory's limited navigation space. Although the trials were successful in terms of goal acquisition, it may be necessary to retrain and redevelop the goal selection pipeline based on data obtained from motor-impaired individuals. Further research in this area could be expanded to include real-world settings such as hospitals and airports, in the hopes of establishing wider acceptance for this technology in the future. With these efforts, we envision a future where Motor Imagery becomes a widely recognized tool in the realm of rehabilitation for motor-impaired individuals.

While our wheelchair is well-suited for indoor use and provides excellent user convenience, additional improvements are necessary to ensure the safety of both the user and pedestrians in outdoor environments. A more efficient suspension and braking system can be developed to address this need. The current active obstacle avoidance system incorporates an 8000-sample 2D LiDAR sensor with a detection range of up to 6 m. For outdoor environments with larger navigable spaces and sparser point clouds, the obstacle detection range would have to be significantly increased. Our detection suite is supported by an Intel RealSense depth camera, which has been tested and performs efficiently even in outdoor settings.

Our fundamental objective is to highlight the severity of the problem we are addressing and the relative simplicity with which an appropriate solution can be reached to ensure autonomous mobility for elderly and motor-impaired individuals. We contend that targeted research and well-engineered solutions built on these methods will meaningfully improve the quality of life of this population. We therefore aim to carefully examine and develop these techniques to change how independent movement is facilitated for individuals with limited mobility, and in doing so make a lasting contribution to society at large.

## **Author details**

Ashok Kumar Chaudhary, Vinay Gupta, Kumar Gaurav, Tharun Kumar Reddy\* and Laxmidhar Behera

Department of Electronics and Communication Engineering, Indian Institute of Technology Roorkee, Roorkee, India

\*Address all correspondence to: tharun.reddy@ece.iitr.ac.in

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **References**

[1] Iturrate I, Antelis JM, Kubler A, Minguez J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Transactions on Robotics. 2009;**25**(3):614-627

[2] Rebsamen B, Guan C, Zhang H, Wang C, Teo C, Ang MH, et al. A brain controlled wheelchair to navigate in familiar environments. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010;**18**(6): 590-598

[3] Müller ST, Celeste WC, Bastos-Filho TF, Sarcinelli-Filho M. Brain-computer interface based on visual evoked potentials to command autonomous robotic wheelchair. Journal of Medical and Biological Engineering. 2010;**30**(6): 407-415

[4] Millán JR, Renkens F, Mourino J, Gerstner W. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Transactions on Biomedical Engineering. 2004;**51**(6):1026-1033

[5] Choi K. Control of a vehicle with EEG signals in real-time and system evaluation. European Journal of Applied Physiology. 2012;**112**:755-766

[6] Kim KT, Carlson T, Lee SW. Design of a robotic wheelchair with a motor imagery based brain-computer interface. In: 2013 International Winter Workshop on Brain-Computer Interface (BCI). Gangwon Province, South Korea: IEEE; 2013 Feb 18. pp. 46-48

[7] Duan J, Li Z, Yang C, Xu P. Shared control of a brain-actuated intelligent wheelchair. In: Proceeding of the 11th World Congress on Intelligent Control and Automation. Shenyang, China: IEEE; 2014 Jun 29. pp. 341-346

[8] Andronicus S, Harjanto NC, Widyotriatmo A. Heuristic steady state visual evoked potential based brain computer interface system for robotic wheelchair application. In: 2015 4th International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME). Bandung, Indonesia: IEEE; 2015 Nov 2. pp. 94-97

[9] Zhang R, Li Y, Yan Y, Zhang H, Wu S, Yu T, et al. Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2015;**24**(1): 128-139

[10] Li Z, Zhao S, Duan J, Su CY, Yang C, Zhao X. Human cooperative wheelchair with brain–machine interaction based on shared control strategy. IEEE/ASME Transactions on Mechatronics. 2016; **22**(1):185-195

[11] Lahane P, Adavadkar SP, Tendulkar SV, Shah BV, Singhal S. Innovative approach to control wheelchair for disabled people using BCI. In: 2018 3rd International Conference for Convergence in Technology (I2CT). Pune, India: IEEE; 2018 Apr 6. pp. 1-5

[12] Xin L, Gao S, Tang J, Xu X. Design of a brain controlled wheelchair. In: 2018 IEEE 4th International Conference on Control Science and Systems Engineering (ICCSSE). Wuhan, China: IEEE; 2018 Aug 21. pp. 112-116

[13] Zgallai W, Brown JT, Ibrahim A, Mahmood F, Mohammad K, Khalfan M, et al. Deep learning AI application to an EEG driven BCI smart wheelchair. In: 2019 Advances in Science and Engineering Technology International Conferences (ASET). Dubai, United Arab Emirates: IEEE; 2019 Mar 26. pp. 1-5

[14] Alkhatib R, Swaidan A, Marzouk J, Sabbah M, Berjaoui S, Diab MO. Smart autonomous wheelchair. In: 2019 3rd International Conference on Bio-Engineering for Smart Technologies (BioSMART). Paris, France: IEEE; 2019 Apr 24. pp. 1-5

[15] Nuo G, Wenwen Z, Shouyin L. Asynchronous brain-computer interface intelligent wheelchair system based on alpha wave and SSVEP EEG signals. In: 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP). Wuxi, China: IEEE; 2019 Jul 19. pp. 611-616

[16] Dissanayake MG, Newman P, Clark S, Durrant-Whyte HF, Csorba M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation. 2001;**17**(3):229-241

[17] Grisetti G, Stachniss C, Burgard W. Improved techniques for grid mapping with rao-blackwellized particle filters. IEEE Transactions on Robotics. 2007; **23**(1):34-46

[18] Kohlbrecher S, Von Stryk O, Meyer J, Klingauf U. A flexible and scalable SLAM system with full 3D motion estimation. In: 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics. Kyoto, Japan: IEEE; 2011 Nov 1. pp. 155-160

[19] Hess W, Kohler D, Rapp H, Andor D. Real-time loop closure in 2D LIDAR SLAM. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). Stockholm, Sweden: IEEE; 2016

[20] Mur-Artal R, Tardós JD. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Transactions on Robotics. 2017; **33**(5):1255-1262

[21] Endres F, Hess J, Sturm J, Cremers D, Burgard W. 3-D mapping with an RGB-D camera. IEEE Transactions on Robotics. 2013;**30**(1): 177-187

[22] Pfurtscheller G, Da Silva FL. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clinical Neurophysiology. 1999;**110**(11):1842-1857

[23] Zhang C, Lian Y, Wang G. ARDER: An automatic EEG artifacts detection and removal system. In: 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS). Glasgow, Scotland: IEEE; 2020 Nov 23. pp. 1-2

[24] Brigham EO, Morrow RE. The fast Fourier transform. IEEE Spectrum. 1967; **4**(12):63-70

[25] Alam MN, Ibrahimy MI, Motakabber SM. Feature extraction of EEG signal by power spectral density for motor imagery based BCI. In: 2021 8th International Conference on Computer and Communication Engineering (ICCCE). Kuala Lumpur, Malaysia: IEEE; 2021 Jun 22. pp. 234-237

[26] Bavkar S, Iyer B, Deosarkar S. Rapid screening of alcoholism: An EEG based optimal channel selection approach. IEEE Access. 2019;**7**:99670-99682

## **Chapter 3**
