*Ethical Issues in the New Digital Era: The Case of Assisting Driving*


*DOI: http://dx.doi.org/10.5772/intechopen.88371*

*Security and Privacy From a Legal, Ethical, and Technical Perspective*

to maintaining a licence include vision, physical health and cognitive health [5]. Research indicates that cognitive abilities are important enabling factors for safe driving [6], and that adaptive strategies are essential to maintaining the normal parameters of driving safety in the face of illness and disability [7]. Age-related declines in these abilities pose certain obstacles to safe driving for older adults. A 2001 survey by the OECD found that 15% of those aged 65 or older had stopped driving, while an overwhelming number of those who continued to drive were very selective about when they did so [8]. In general, driving cessation has been linked to increasing age, socioeconomic factors, and declining function and health [9]. The negative effects of driving cessation on older adults' physical, mental, cognitive, and social functioning have been extensively studied [10–12].

Many automotive companies are developing and/or testing driverless cars. Largely, the proposed solutions follow established automation models, such as the six levels of automation defined by the NHTSA [13]. Driver assistance technology presents a potential solution to problems pertaining to driver persistence and the management of fitness-to-drive issues in older adults. As this technology is not yet fully implemented and in use by the public, it is very difficult to predict and assess its potential ethical implications and impact. Should the purpose of these systems go beyond safety? Is full automation an appropriate solution to managing the apparent conflict between two goals, (1) promoting driver persistence and (2) ensuring road safety? That is, is it appropriate to enable an older driver to continue driving, even if there is a risk of a serious accident given their medical background? With crashes also comes the question of liability. Currently, lawmakers are considering who is liable when an autonomous car is involved in an accident. Such discussions raise many complex legal and ethical questions. Largely, the literature around ethics and driverless cars focuses on issues pertaining to (1) addressing conflict dilemmas on the road (machine ethics), (2) privacy and (3) minimising technology misuse/cybersecurity risks. These are indeed important ethical issues. However, the literature and public debate tend to avoid other serious ethical issues, specifically (4) the intended use and purpose of this technology, (5) the role of the person/driver (including older adult drivers) and (6) the potential negative consequences of this technology.

In relation to (6), this concerns the social consequences of this technology and its potential impact on older adult identity and well-being. The future is indeed unknown. The advancement of new driving solutions raises overarching questions about the values of society and how we design technology to (a) promote positive values around ageing and enhance the ageing experience, (b) protect human rights, (c) ensure human benefit and (d) prioritise well-being. Specifically, it raises fundamental questions about the value we place on promoting autonomy and social participation for older adults and optimising quality of life/well-being. Public opinion on self-driving cars (including solutions for older adults) will determine the extent to which people purchase and accept such systems [14]. We should not proceed with this technology simply because it is available. Critically, designers must carefully consider the human dimensions of this technology and its social implications. To this end, this chapter reviews the relevant ethical considerations in relation to assisted driving solutions. Further, it presents a new ethically aligned system concept for driver assistance. In so doing, it addresses the philosophical principles that underlie the proposed driving system concept and, specifically, the role of the person.

**2. Ethics, rights, digital ethics and ontological design**

Ethics concerns the moral principles that govern a person's behaviour or how an activity is conducted [15]. A key distinction in ethics is that between what is unethical and what is undesirable.


Primarily, moral principles apply to a person. However, a moral code can also be ascribed to the behaviour of automated or intelligent systems (A/IS). Accordingly, driverless cars are termed 'artificial moral agents'.

The Universal Declaration of Human Rights (1948) enshrines the rights of all persons [16]. These include rights pertaining to dignity (Article 1), autonomy (Article 3), privacy (Article 12), and safety (Article 29) [16]. Some would argue that rights also apply to technology and artificial agents; these are referred to as 'transhuman rights' [17, 18]. To this end, the field of roboethics has emerged. Specifically, roboethics is concerned with the moral behaviour of humans as they design, construct, use and treat artificially intelligent beings.

More broadly, 'digital ethics' or 'information ethics' deals with the impact of digital Information and Communication Technologies (ICT) on our societies and the environment at large [19]. As defined by Capurro [19], it addresses the ethical implications of things which may not yet exist, or things which may have impacts we cannot predict.

Progress is typically defined in relation to concepts of advancement and improvement. As stated by the Organisation for Economic Co-operation and Development (OECD), 'Being able to measure people's quality of life is fundamental when assessing the progress of societies' [20]. Future technology is shaping (and will shape) our political, social and moral existence. The application of ethics to questions concerning technology development is not new. In his seminal work 'The Question Concerning Technology', the philosopher Heidegger suggests that in asking what technology is, we ask questions about who we are [21]. In so doing, we examine the nature of existence and human autonomy [21]. Such ideas have led to the concept of 'ontological design', which focuses on 'the relation between human beings and lifeworlds' [22]. As argued by Winograd and Flores, new technology does not simply change the task; it changes what it means to be human [22]. Put simply, we are designed by our designing and by that which we have designed [23].

The Information Technology (IT) sector is taking some leaps in relation to addressing these questions. Currently, there is a large focus on issues pertaining to well-being, data privacy and cybersecurity. In 2016, Amazon, Google, Facebook, IBM, and Microsoft established a non-profit partnership (the Partnership on Artificial Intelligence to Benefit People and Society) to formulate best practices for artificial intelligence technologies [24]. Further, the IEEE Standards Association has recently articulated a desire to create technology that improves the human condition and prioritises well-being. Specifically, the 'IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems' has defined a set of core ethical principles for autonomous and intelligent systems (A/IS). As stated in 'Ethically Aligned Design (EAD1e), A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems' [25], 'for extended intelligence and automation to provably advance a specific benefit for humanity, there needs to be clear indicators of that benefit'. Further, the IEEE Global Initiative argues that 'the world's top metric of value (Gross Domestic Product) must move beyond GDP, to holistically measure how intelligent and autonomous systems can hinder or improve human well-being' [25].

**3. Well-being, identity, quality of life and self-efficacy**

The concept of identity has three pillars: the person, the role and the group [26]. Personal identity refers to the concept of the self, which develops over time and across the life-span. This includes the aggregate of characteristics by which a person is recognised by himself/herself and others, what matters to the person and their values [27]. Crucially, autonomy is central to personal identity [27].
