*9.2.7 A learning system will enable driver persistence and a positive driver experience*

Underpinning the system logic is a vision of the co-pilot as a learning system. Arguably, a human-centric design philosophy necessitates continuous learning on the part of the co-pilot (i.e. including AI/machine learning). If the co-pilot can learn about those situations and tasks that prove challenging and/or stressful for the older adult driver (e.g. driving in traffic, poor visibility, changing lanes and parking), then it can truly tailor the task support that it provides to the driver. This tailored task support is predictive/intelligent, ensuring that the driver persists in challenging driving situations, while also enjoying their drive.
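The learning-and-tailoring idea above can be sketched in a few lines of code. This is a purely illustrative, hypothetical sketch (the class, method names and the simple count-based threshold are our assumptions, not part of the proposed system): the co-pilot records stress observations per driving situation and raises its support level for the situations a particular driver has repeatedly found challenging.

```python
# Hypothetical sketch of a co-pilot that learns a driver's challenge areas.
# The counting rule and threshold are illustrative assumptions only.
from collections import Counter


class CoPilotLearner:
    def __init__(self, threshold: int = 3):
        self.stress_counts = Counter()  # stress observations per situation
        self.threshold = threshold      # observations before support is tailored

    def observe(self, situation: str, stressed: bool) -> None:
        """Record whether the driver showed stress in a given situation
        (e.g. 'changing_lanes', 'poor_visibility', 'parking')."""
        if stressed:
            self.stress_counts[situation] += 1

    def support_level(self, situation: str) -> str:
        """Predictively raise task support for learned challenge areas."""
        if self.stress_counts[situation] >= self.threshold:
            return "enhanced"  # proactive, tailored assistance
        return "baseline"
```

In a real system the simple counter would of course be replaced by richer driver-state modelling; the sketch only shows how per-driver tailoring follows from accumulated observations.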

### **9.3 Technology and the conceptualization of the driver**

### *9.3.1 Role of driver in the system and adaptive automation*

The proposed system maintains the autonomy of the individual. In principle, the driver can choose to engage (and/or switch off) task support and advanced levels of automation. Overall, we are starting from the point of the engaged driver, who has capacity and ability. In this way, the system supports a vision of the older adult driver as 'in control'. The role of the driver is to work in partnership with the 'co-pilot' to achieve a safe and enjoyable drive. Critically, the system treats the driver as 'capable' and 'in charge' unless it detects that the driver is incapacitated and/or that there is a potential for a safety critical event (i.e. level 3 assistance/safety critical intervention). If the system detects that the driver is in a seriously impaired state and/or incapacitated, or that a safety critical event is imminent, then the principle of 'driver autonomy' is outweighed by that of safety. In such cases, authority moves to 'automation'.
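The authority-transfer principle described above (driver autonomy by default, outweighed only by safety) can be expressed as a minimal arbitration rule. The sketch below is illustrative only; the type names and boolean signals are our assumptions, standing in for whatever driver-state monitoring the actual system would use.

```python
# Illustrative sketch of the chapter's authority-arbitration principle:
# the driver stays in charge unless incapacitation or an imminent
# safety-critical event is detected. Names/signals are assumptions.
from dataclasses import dataclass
from enum import Enum


class Authority(Enum):
    DRIVER = "driver"
    AUTOMATION = "automation"


@dataclass
class DriverState:
    incapacitated: bool             # e.g. seriously impaired / unresponsive
    safety_critical_imminent: bool  # e.g. imminent collision detected


def arbitrate(state: DriverState) -> Authority:
    """Driver autonomy by default; safety outweighs autonomy."""
    if state.incapacitated or state.safety_critical_imminent:
        return Authority.AUTOMATION  # level 3 safety-critical intervention
    return Authority.DRIVER          # engaged, capable driver stays in charge
```

The point of the sketch is the ordering of the principles: the safety check is the only condition under which authority leaves the driver.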

### *9.3.2 Driver as a person (holistic approach)*

The proposed driving assistance system is premised on a conceptualisation of the driver/older adult as a person and not a set of symptoms/conditions (i.e. holistic approach). Specifically, biopsychosocial concepts of health and wellness inform the logic of the proposed driving assistance system. The system is concerned with all aspects of the driver's wellness, including the driver's physical, social, cognitive and emotional health.

### *9.3.3 Diversity in older adult population*

Critically, the driving assistance system logic is premised on the idea that not all older adult drivers are the same. Older adult drivers vary in many ways, including body size and shape, strength, mobility, sensory acuity, cognition, emotions, driving experience, driving ability (and challenges) and confidence. In relation to driving situation and ability, we have segmented older adults into a series of high-level profiles or clusters, as indicated previously. These profiles have been further specified in relation to a series of personae. Critically, the system logic directly addresses the needs and requirements of these specific personae.

### *9.3.4 Upholding rights (autonomy, dignity and privacy)*

The acceptability of the proposed system largely depends upon how it treats certain issues pertaining to driver rights. Overall, this technology is designed to uphold an older adult's rights. This is especially salient in relation to preserving driver autonomy, monitoring the driver state and recording driver health information. As outlined earlier, the technology maintains the autonomy of older adults (i.e. the starting point is the engaged driver). Further, we are proposing that information captured about the person's current health and wellness and driving challenges/events is NOT shared with other parties. In all cases, the driver is in charge of their own data and of decisions about how it is stored and shared with others.

*Ethical Issues in the New Digital Era: The Case of Assisting Driving*

*DOI: http://dx.doi.org/10.5772/intechopen.88371*

### **10. Discussion**

### *10.1 Ontological design, digital ethics and coping with change*

As highlighted by Fry, the introduction of new technology has the potential to transform what it means to be human [23]. In this way, the introduction of new assisted driving solutions presents a challenge to our being. Design decisions are normative: they reflect societal values concerning human agency, human identity and the avoidance of ageism. In particular, they provide an opportunity to foster quality of life for older adults as they age, and to promote positive ageing. Design/technology teams thus exercise choice in relation to what is valued, and in advancing technology that improves the human condition (rather than worsening it).

The discovery and utilisation of fire by early humans was of course transformative and positive [63]. It shaped how we ate, kept warm and protected ourselves. However, less examined are the negative by-products that came with fire, and the ways in which humans may or may not have adapted to them [63]. In the same way, it is important that designers consider issues pertaining to potential technology impact in terms of the three strands of health and wellness (i.e. biological, psychological and social health). In particular, designers should consider protections concerning the 'unknown' future implications of this technology (including the potential negative social consequences).

In relation to the introduction of other consumer and information technologies (for example, mobile phones and social media), many important questions were posed 'post hoc'. As stated by Heraclitus, 'One cannot step twice in the same river' [64]. These technologies have resulted in many changes to previously established social norms. Arguably, social norms in relation to identity, privacy and associated information sharing appear to have changed, without serious questioning of the implications of this. Further, in their early stages, designers did not properly consider the potential social consequences of these technologies (for example, social isolation and depression).

Nonetheless, just because the horse has bolted (i.e. the automotive industry is currently advancing and testing driverless cars) does not mean there is nothing to be achieved and/or that we are powerless. As mentioned previously, the availability of this technology does not mean that we have no choice. Critically, we need to challenge existing design assumptions from the perspective of human benefit, well-being and rights. In this regard, the IEEE Global Initiative represents a positive step in this direction.

Salganik proposes a hope-based and principle-based approach to machine ethics [65]. This is contrasted with a 'fear-based and rule-based' approach in social science, and a more 'ad hoc' ethics culture emerging in data and computer science [65]. Hope is not enough! As evidenced in this research, principles need to be both articulated and then embedded in design concepts. Importantly, human factors methods are useful here, in relation to considering different stakeholders and adjudicating between conflicting goals/principles.

### *10.2 System purpose and human benefit*

In line with what is argued by the IEEE, A/IS technologies can be narrowly conceived from an ethical standpoint. Such technologies might be designed to be legal, profitable and safe in their usage. However, they may not positively contribute to human well-being [25]. Critically, new driving solutions should not have 'negative consequences on people's mental health, emotions, sense of themselves, their autonomy, their ability to achieve their goals, and other dimensions of well-being' [25].
