**2. Historical background**

Autonomous vehicles are an active area of research with a rich history. Research on autonomous driving performed in the early 1980s and 1990s demonstrated the possibility of developing vehicles that can control their movements in complex environments [13, 14]. Initial prototypes of autonomous vehicles were limited to indoor use [15–17]. The first study on a vision-guided road vehicle was performed at the Mechanical Engineering Laboratory, Tsukuba, Japan, using vertical stereo cameras [18]. In the early 1980s, a completely onboard autonomous vision system for vehicles was developed and deployed with digital microprocessors [13, 19]. The first milestone towards road vehicles with machine vision was accomplished in the late 1980s, with the fully autonomous longitudinal and lateral control demonstrations of UBM's (Universität der Bundeswehr München) computer-vision test vehicle VaMoRs on a free stretch of Autobahn, over 20 km at 96 km/h [13]. These encouraging results led to the inclusion of computer vision for vehicles in the European EUREKA project "Prometheus" (1987–1994) and also spurred European car manufacturers and universities to pursue research in the field [13]. The major focus of these developments was to demonstrate which functions of autonomous vehicles could be automated through computer vision [13, 14]. The promising results of such demonstrations kicked off many initiatives [13, 20].

The Intermodal Surface Transportation Efficiency Act (ISTEA), the transportation authorization bill passed in 1991, instructed the United States Department of Transportation (USDOT) to demonstrate an autonomous vehicle and highway system by 1997 [20, 21]. The USDOT built partnerships with academia, state, local and private sectors in conducting the program, and made extraordinary progress with revolutionary developments in vehicle safety and information systems [21]. The USDOT program also motivated the US Federal Highway Administration (FHWA) to start the National Automated Highway System Consortium (NAHSC) program with different partners including California PATH, General Motors, Carnegie Mellon University, Hughes, Caltrans, Lockheed Martin, Bechtel, Parsons Brinckerhoff and Delco Electronics [20]. The Intelligent Vehicle Initiative (IVI) program was announced in 1997 and was legislated in the 1998 Transportation Equity Act for the 21st Century (TEA-21), with the purpose of speeding up the development of driver-assistance systems by focusing on crash prevention rather than mitigation, and on vehicle-based rather than highway-based applications [22]. The IVI program resulted in a number of successful developments and deployments in field operational tests, which include different commercial applications such as lane change assist, lane departure/merge warning, adaptive cruise control, adaptive forward crash warning and vehicle stability systems [22]. Commercial versions of these systems were manufactured shortly afterwards, and ever since, their evolution and industrial penetration have been increasing [22].

Continuing advancements in computational and sensing technologies have further spurred interest in the field of autonomous vehicles and in developing cost-effective systems [5].

Present state-of-the-art autonomous vehicles can sense their local environment, identify different objects, reason about the evolution of their environment and plan complex motions while obeying different rules. Many advancements have been made in the last decade and a half, as evidenced by different successful demonstrations and competitions. The most prominent historical series of such competitions/challenges was organized by the US Department of Defense under the Defense Advanced Research Projects Agency (DARPA). The competition was initially launched as the DARPA Grand Challenge (DGC), and five such competitions have been held so far: the first in March 2004, the second in October 2005, the third in November 2007, the fourth from October 2012 to June 2015 and the fifth from January to April 2013 [5, 23–25]. The fourth challenge, named the DARPA Robotics Challenge (DRC), was aimed at the development of semi-autonomous emergency maintenance ground robots [23, 25], and the fifth, named the Fast Adaptable Next-Generation Ground Vehicle (FANG GV) challenge, was aimed at adaptive designs of vehicles [24]. Since neither FANG nor DRC targeted self-driven/autonomous/robotic vehicles, we do not discuss them in this chapter. In the first three challenges, hundreds of autonomous vehicles from the USA and around the world participated and exhibited different levels of versatility. The first and second challenges were aimed at examining the vehicles' ability in an off-road environment: autonomous vehicles had to navigate up to 240 km through a desert at speeds up to 80 km/h [5]. In the first competition, only five vehicles were able to travel more than a mile, and of those five, the furthest-travelling vehicle covered only 7.32 miles (11.78 km) [5]. None of the vehicles completed the route; hence, there was no winner, and the second challenge was scheduled for October 2005.
In the second challenge, five vehicles successfully completed the route, and Stanley from Stanford University, Palo Alto, California, secured first place in the competition. The third challenge, named the DARPA Urban Challenge (DUC), shifted to an urban area. The route involved 60 miles of travel to be completed within 6 h, and vehicles had to obey all traffic regulations during their autonomous driving. Of the 11 final teams, only 6 successfully completed the route, and Boss from Carnegie Mellon University, Pittsburgh, Pennsylvania, secured first place in the competition [5].









The participating teams in the DGC competitions adopted different types of system architectures, with standalone implementations. However, at a higher level, they decomposed the system architecture into four basic subsystems, namely, sensing, perception, planning and control (see **Figure 3** for a pictorial representation) [5]. The sensing unit takes raw measurements from different on-/off-board sensors (e.g. GPS, radar, LIDAR, odometer, vision, inertial measurement unit, etc.) for perceiving the static and dynamic environment. The sensing unit passes the raw data to the perception unit, which then generates usable information about the vehicle (e.g. pose, map-relative estimations, etc.) and its environment (e.g. lanes, other vehicles, obstacles, etc.) based on the provided data. The planner unit takes the usable information/estimations from the perception unit, reasons about the provided information and plans the vehicle's actuations in the environment, such as path, behavioural, escalation and map planning, etc., to maximize its well-defined utility functions. Finally, the planner unit passes the resulting information/commands to the control unit, which is responsible for actuating the vehicle.

**Figure 3.** High-level system architecture of the autonomous vehicles.
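The four-subsystem decomposition above can be sketched as a simple data pipeline, where each unit consumes the previous unit's output and the chain ends in an actuation command. The following Python sketch is purely illustrative: all class, function and field names are our own assumptions, not taken from the chapter or from any team's actual software.

```python
from dataclasses import dataclass, field


@dataclass
class RawSensorData:
    """Raw measurements from the sensing unit (fields are illustrative)."""
    gps: tuple
    lidar_points: list


@dataclass
class WorldModel:
    """Usable estimates produced by the perception unit."""
    pose: tuple
    obstacles: list = field(default_factory=list)


def sense() -> RawSensorData:
    # Placeholder: a real sensing unit would poll GPS, LIDAR, radar, IMU, etc.
    return RawSensorData(gps=(37.42, -122.08), lidar_points=[(1.0, 2.0)])


def perceive(raw: RawSensorData) -> WorldModel:
    # Placeholder: fuse raw sensor data into pose and obstacle estimates.
    return WorldModel(pose=raw.gps, obstacles=raw.lidar_points)


def plan(world: WorldModel) -> list:
    # Placeholder: produce a short path (list of waypoints) from the estimates.
    return [world.pose, (world.pose[0] + 0.01, world.pose[1])]


def control(path: list) -> str:
    # Placeholder: the control unit converts waypoints into actuation commands.
    return f"steering towards {path[1]}"


# The four units run as a pipeline: sensing -> perception -> planning -> control.
command = control(plan(perceive(sense())))
print(command)
```

The point of the sketch is the one-directional data flow: each unit has a single, well-defined input and output, which is why the teams could implement the subsystems independently.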

One of the main lessons learned from the DARPA challenges was the need for autonomous vehicles to be connected, that is, to have the ability to interact with each other and to access each other's information or information about their surroundings [5]. This also gives some idea of the importance of cloud infrastructure in accomplishing the dream of autonomous vehicles. In the 2000s, cloud computing evolved and came into existence. Amazon introduced its Elastic Compute Cloud (EC2) as a web service in 2006, aiming to provide resizable computing capacity on cloud servers [26]. In 2008, OpenNebula became the first open-source software to provide private and hybrid clouds [27]. In the same year, Azure was announced by Microsoft, aiming to provide cloud computing services, and was released in early 2010 [28]. In mid-2010, the OpenStack project was jointly launched by NASA and Rackspace Hosting, with the intention of helping organizations set up cloud computing services (mostly IaaS) on their standard hardware [29]. Oracle Cloud was announced by Oracle in 2012, with the aim of providing access to an integrated set of IT solutions, including SaaS, PaaS and IaaS [30].

The importance of connecting machines in manufacturing automation systems through networking was realized three and a half decades ago, when General Motors developed the Manufacturing Automation Protocol in the 1980s [31]. Before the advent of the World Wide Web (WWW), different types of incompatible protocols were adopted by different vendors. In the early 1990s, the advent of the WWW promoted the Hypertext Transfer Protocol (HTTP) over the Internet Protocol (IP) [32]. In 1994, the first industrial robot was connected to the WWW so that it could be teleoperated by different users through a graphical user interface [33]. In the mid- and late 1990s, different types of robots were integrated with the web to explore robustness and interface issues, which initiated study in a new field named "Network Robotics" [34, 35]. In 1997, Inaba et al. investigated the benefits of remote computing to accomplish control of remote-brained robots [36]. The IEEE Robotics and Automation Society established its technical committee on networked robots in 2001 [37]. The initial focus of the committee was on Internet-based teleoperated robots, which was later extended to a wider range of applications [37]. In 2006, the MobEyes system was proposed, which exploits vehicular sensor networks to record the surrounding environment and events for the purpose of urban monitoring. The RoboEarth project was announced in 2009, with the purpose of using the WWW for robots, such that they can share their data and learn from each other [38]. In 2010, James Kuffner explained the concept of 'remote-brained' robots (i.e. physical separation of the robotic hardware and the software) and introduced the term "Cloud Robotics", along with potential applications and benefits of cloud-enabled robots [7]. In the same year, the term "Internet of Things (IoT)" was introduced for the network of physical things (e.g. vehicles, household appliances, buildings, etc.) that contain sensors and software and have network connectivity for exchanging information with other objects [39]. Different vehicular ad hoc networks (VANETs) were proposed in 2011, with the purpose of providing several cloud services for next-generation automotive systems [40, 41]. The term "Industry 4.0" was introduced in the same year for the fourth industrial revolution, with the purpose of using networking to follow up the first three industrial revolutions [42].

In 2012, M. Gerla discussed different design principles, issues and potential applications of Vehicular Cloud Computing (VCC) [43]. In the same year, S. Kumar et al. proposed an Octree-based cloud-assisted design for autonomous driving, assisting vehicles in planning their trajectories [44]. The high-level system architecture can be visualized as presented in **Figure 4**. The purpose of the sensing and controller units is the same as explained for **Figure 3**, with the modification that the perception unit has been merged into the planner unit, and the planner unit has been divided into two sub-units, namely, the onboard planner and the planner over the cloud. Both planner units can communicate with each other to exchange desired information and to support vehicle-to-vehicle (V2V) communication. The cloud planner can request sensor data from various autonomous vehicles, which is then aggregated to generate information about obstacles, path planning, localization and emergency control, etc. The onboard planner unit communicates with the cloud planner to plan the optimal trajectory and passes the resulting information to the controller unit, which then actuates the vehicle as required.






**Figure 4.** High-level system architecture of the cloud-assisted design of the autonomous vehicles.
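The onboard/cloud planner split can be made concrete with a minimal Python sketch. The class names and the obstacle-set representation below are our own illustrative assumptions, not the design of the Octree-based system in [44]: each vehicle reports its local observations to a cloud planner, and each onboard planner then plans against the aggregated obstacle map, so one vehicle can avoid an obstacle that only another vehicle has seen.

```python
class CloudPlanner:
    """Hypothetical cloud-side planner aggregating data from many vehicles."""

    def __init__(self):
        self.reports = {}  # vehicle_id -> latest reported obstacle set

    def submit(self, vehicle_id, obstacles):
        # The cloud planner collects sensor data from individual vehicles.
        self.reports[vehicle_id] = obstacles

    def aggregated_obstacles(self):
        # Merge every vehicle's observations into one shared obstacle map.
        merged = set()
        for obstacles in self.reports.values():
            merged.update(obstacles)
        return merged


class OnboardPlanner:
    """Hypothetical onboard planner exchanging data with the cloud planner."""

    def __init__(self, vehicle_id, cloud):
        self.vehicle_id = vehicle_id
        self.cloud = cloud

    def plan(self, local_obstacles, candidate_paths):
        # Share local observations, then plan against the aggregated map,
        # which may contain obstacles this vehicle has not seen itself.
        self.cloud.submit(self.vehicle_id, local_obstacles)
        known = self.cloud.aggregated_obstacles()
        # Pick the first candidate path that avoids all known obstacles.
        for path in candidate_paths:
            if not known.intersection(path):
                return path
        return None  # escalate: no safe path among the candidates


cloud = CloudPlanner()
a = OnboardPlanner("car-a", cloud)
b = OnboardPlanner("car-b", cloud)

a.plan(frozenset({(5, 5)}), [[(1, 1)]])           # car-a reports an obstacle at (5, 5)
path = b.plan(frozenset(), [[(5, 5)], [(2, 2)]])  # car-b avoids it via the cloud
print(path)  # -> [(2, 2)]
```

The design choice this illustrates is the one described in the text: perception and planning are shared between the vehicle and the cloud, with V2V-style information exchange happening through the aggregated cloud state rather than direct links.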

In the same year, 2012, the term "Industrial Internet" was introduced by General Electric, with the purpose of connecting industrial equipment over networks for exchanging data [45]. In 2014, Gerla et al. investigated the vehicular cloud and deduced that it will be a core system for autonomous vehicles, making further advancements possible [46]. In the same year, Ashutosh Saxena announced the "RoboBrain" project, with the aim of building a massive online brain for all the robots of the world from publicly available internet data [47, 48]. In early 2016, HERE announced the launch of its cloud-based mapping service for autonomous vehicles, aiming to enhance vehicles' automated driving features [49]. In February 2016, Maglaras et al. investigated the concept of the Social Internet of Vehicles (SIoV) and discussed its different design principles, potential applications and research issues [50].
