**10.1 Ontological design, digital ethics and coping with change**

As highlighted by Fry, the introduction of new technology has the potential to transform what it means to be human [23]. In this way, the introduction of new assisted driving solutions presents a challenge to our being. Design decisions are normative: they reflect societal values concerning human agency and human identity, including the avoidance of ageism. In particular, they provide an opportunity to foster quality of life for older adults as they age, and to promote positive ageing. Design/technology teams thus exercise choice in relation to what is valued, and in advancing technology that improves the human condition rather than worsening it.

The discovery and utilisation of fire by early humans was of course transformative and positive [63]. It shaped how we eat, keep warm and protect ourselves. However, less examined are the negative by-products that came with fire, and the ways in which humans may or may not have adapted to them [63]. In the same way, it is important that designers consider the potential impact of technology in terms of the three strands of health and wellness (i.e. biological, psychological and social health). In particular, designers should consider protections concerning the 'unknown' future implications of this technology, including its potential negative social consequences.

In relation to the introduction of other consumer and information technologies (for example, mobile phones and social media), many important questions were posed 'post hoc'. As stated by Heraclitus, 'One cannot step twice in the same river' [64]. These technologies have resulted in many changes to previously established social norms. Arguably, social norms in relation to identity, privacy and associated information sharing have changed without serious questioning of the implications of this. Further, in their early stages, designers did not properly consider the potential social consequences of these technologies (for example, social isolation and depression).

Nonetheless, just because the horse has bolted (i.e. the automotive industry is currently advancing and testing driverless cars) does not mean that nothing can be achieved or that we are powerless. As mentioned previously, the availability of this technology does not mean that we have no choice. Critically, we need to challenge existing design assumptions from the perspective of human benefit, well-being and rights. In this regard, the IEEE Global Initiative represents a positive step.

Salganik proposes a hope-based and principle-based approach to machine ethics [65]. This contrasts with the 'fear-based and rule-based' approach of social science, and the more 'ad hoc ethics culture' emerging in data and computer science [65]. Hope, however, is not enough! As evidenced in this research, principles need to be both articulated and then embedded in design concepts. Importantly, human factors methods are useful here, in relation to considering different stakeholders and adjudicating between conflicting goals/principles.
