**2. Ethical awareness and decision making**

Instilling ethics at a university or research institution can be quite challenging, since most researchers have only briefly engaged in venues outside of their technical discipline. Their experiences with ethics generally have been under the mantle of academic integrity. Thus, it is necessary to build a bridge between academic integrity and research ethics. A common extrapolation in scientific research is to transition from the "data-rich" to the "data-poor"; from the more certain to the uncertain. Ethics falls within the domain of the data-poor and uncertain for most scientists, engineers and technologists. That said, we can start from some basics and transition to the more complex aspects of ethics likely to confront technologists engaged in cutting-edge research and development.

### **2.1 The drivers education analogy**

Research ethics can be likened to driver's education training, where the basics of driving a vehicle from a textbook (i.e. the "Rules of the Road") are augmented by hypothetical cases and scenarios that engage the student in "what ifs" (e.g. what factors led to a bad outcome, like a car wreck?). Society realizes that new drivers are at risk and are placing other members of society at risk. Teenagers are asking to handle an object with a lot of power (e.g. hundreds of horsepower), a large mass (greater than a ton), and a potential to accelerate rapidly and travel at high speeds. The problem is that the new driver cannot be expected to understand the societal implications of using this technology (the automobile). To raise their consciousness (and hopefully their conscientiousness), they are shown films of what happens to drivers who do not take their driving responsibilities seriously. Likewise, ethics training may include films and discuss cases that scare researchers, in hopes that this will remind them of how to act when an ethical situation arises. This takes place in a safe environment (the classroom, with a mentor who can share experiences), rather than relying on one's own experiences.

But memory fades with time. Psychologists refer to this as extinction, which can be graphed much like a decay curve familiar to engineers (see Fig. 3). If an event is not extremely dramatic, its details will soon fade in memory. This may be why ethics training courses employ cases with extremely bad outcomes (e.g. failed medical devices, operations gone horribly wrong, bridge failures, fatal side effects, scientific fraud on a global scale) as opposed to more subtle cases (e.g. the unlikely misuse or off-label use of an otherwise well-designed product).

Extinction could also occur if an unpleasant event happens to someone else, such as the scenarios in the driver's education films. One uncertainty associated with "canned" cases, particularly online "you be the judge" cases, is that the trainee does not directly relate to the situation or scenario. Thus, the individual technologist may not expect the bad outcomes to happen to his or her technology, even if there are strong parallels to one's own real-world situation.

Events are much more memorable when directly translatable to one's own experiences. Anyone who has been in a car wreck will remember it for many years. Hearing about another's case means more if one has experienced a very similar situation. For example, new drivers have little experiential data from which to draw, a situation analogous to that of new technologies. By definition, the ethics of emerging technologies must often be extrapolated from rather dissimilar scenarios.


Fig. 3. Hypothetical memory extinction curves. Curve A represents the most memorable case and Curves B and C less memorable. Curve C is extinguished completely with time. While the events in Curves A and B are remembered, less information about the event is remembered in Curve B because the event is less dramatic. The double arrow represents the difference in the amount of information retained in long-term memory. Source: D.A. Vallero (2007). Biomedical Ethics for Engineers: Ethics and Decision Making in Biomedical and Biosystem Engineering. Elsevier Academic Press, Burlington, MA.
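The chapter gives no equation for these curves, but the decay they depict can be sketched with a first-order exponential model of the kind engineers know well. The following is an illustrative sketch, not from the source; the `strength` parameter and the time values are invented stand-ins for how dramatic the learning event was:

```python
import math

def retention(t, strength):
    """Fraction of an event's details still remembered after t months,
    modeled (hypothetically) as first-order exponential decay.
    Larger `strength` means a more dramatic event that decays more
    slowly, as in Curve A of Fig. 3; smaller values behave like the
    nearly extinguished Curve C."""
    return math.exp(-t / strength)

# A dramatic case (e.g. a fatal device failure) vs. a subtle one
# (e.g. off-label use of a well-designed product), at 0, 6 and 12 months:
dramatic = [round(retention(t, strength=24.0), 2) for t in (0, 6, 12)]
subtle = [round(retention(t, strength=3.0), 2) for t in (0, 6, 12)]
print(dramatic)  # decays slowly
print(subtle)    # extinguishes quickly
```

The double arrow in Fig. 3 corresponds to the gap between the two lists at any fixed time: the less dramatic event retains far less information in long-term memory.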

Training programs employ some measures to overcome or at least ameliorate extinction. Annual or recurring training programs addressing ethics and responsible conduct are common at many institutions (See Fig. 4).

Governing bodies are increasingly stressing the importance of responsible research. Thus, universities and research institutions have instituted training programs to ensure that research is conducted in a responsible manner. In the United States, for example, the Office of Research Integrity (ORI 2011) requires that all publicly funded entities include Responsible Conduct of Research (RCR) training. This is an important first step in instilling and enforcing ethical behavior, but ethical awareness is merely the first step in decision making related to emerging technologies.

Ethical Decisions in Emergent Science, Engineering and Technologies 29


Fig. 4. Hypothetical extinction curves. The solid line represents a single learning event with no reminders (reinforcements). The dashed line shows reminder events in addition to the initial learning event. The double arrow is the difference in information retained in long-term memory as a result of adding reminders. Source: D.A. Vallero (2007). Biomedical Ethics for Engineers: Ethics and Decision Making in Biomedical and Biosystem Engineering. Elsevier Academic Press, Burlington, MA.
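The effect of reminder events in Fig. 4 can be illustrated with a deliberately simple extension of an exponential-decay retention model. This sketch is not from the source; it assumes, as a crude simplification, that each reminder fully restarts the decay clock:

```python
import math

def retention_with_reminders(t, strength, reminders):
    """Hypothetical fraction of learned information retained at time t
    (months), where each reminder event restarts the exponential decay
    clock -- the dashed line of Fig. 4. `reminders` is a sorted list of
    times at which refresher training occurs."""
    last_event = 0.0  # the initial learning event
    for r in reminders:
        if r <= t:
            last_event = r  # most recent reinforcement before time t
    return math.exp(-(t - last_event) / strength)

# Single learning event vs. annual refresher training, checked at 30 months:
no_reminders = retention_with_reminders(30, strength=12.0, reminders=[])
annual = retention_with_reminders(30, strength=12.0, reminders=[12, 24])
print(no_reminders < annual)  # True: reminders leave more in long-term memory
```

The gap between `annual` and `no_reminders` plays the role of the double arrow in Fig. 4, which is why recurring rather than one-time ethics training is the common institutional practice.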

#### **2.2 Ethical decision making**

Awareness is followed by sound decision making (Pickus 2008). Learning enough to make the best ethical decision, as is the case in learning to drive a car, results from a combination of formal study, interactive learning, and practice. While considering cases is helpful, it is no substitute for experiential learning. As evidence, technical professions require a period of time during which a newly minted engineer, medical practitioner, or designer can learn from a more seasoned professional. Much of this is to gain the benefits of real-world experience without the new technologist having to suffer through painful trial and error, making mistake after mistake, before finally learning enough about the profession beyond textbooks to begin practice (society, clients and patients rightfully would not allow this!). But this stage also helps the new professional become inculcated into a new scientific and professional community, with its distinct and often unforgiving norms and mores. This can be likened to the new driver spending time behind the wheel with a seasoned driver. Only after a defined accompaniment stage may the driver be expected to know the subtleties of merging left, parallel parking and other skills gained only after ample practice. Responsibility is gained incrementally. While the formal professional development stage is a necessary step in ethical growth, it is wholly insufficient for the ethics associated with the complexities of emerging technologies. Returning to our driving analogy, the ethics of emerging technologies is more akin to driving in the 24 Hours of Le Mans, a grueling race with varying road conditions and unexpected obstacles. Predicting the benefits and risks associated with an emerging technology (like Le Mans) is much more uncertain than most technical endeavors, with rewards and risks well above those ever experienced by the average technologist (or the average driver).

The path to combined competence and integrity includes a number of steps. First, the researcher learns what is the right thing to do, technically and ethically. Next, the technologist learns how and when to decide what is right under various scenarios. Ultimately, the researcher's behavior reflects moral development. Along the way, at least within the traditional professions, the technologist advances from academic preparation to internships and practica, to membership in professional societies, leading to a morally exemplary career.

Evaluating whether a new technology is "worth it" depends on the metrics employed to compare the benefits to the risks.

Thankfully, most scientists engage in efforts justified by noble ends, even if one's particular research or practice provides but a small contribution to those ends. However, the moral imperative has two parts, the work itself *and* the obligation to do the right thing, i.e. Kant's concept of "duty." If deploying a technology fails to meet either or both of these requirements, it is considered to be morally unacceptable.

Evaluation of the ethics of a technology is not a discrete snapshot of the technology. The entire life cycle must be considered. Any technology that is poorly conceived, designed and operated fails the test of duty, even if the stated purpose is noble. An example would be to miss some key detrimental traits of a strain of genetically modified bacterium that effectively detoxifies a water pollutant. The endpoint, destruction of a pollutant, meets half of the categorical imperative (noble objective), but if the bacteria adversely affect nearby ecosystems by destroying beneficial microbes, the researcher is engaging in unethical behavior. The research fails the test of universalization since, by extension, all such ecosystems would be harmed every time these organisms are used. That is, if all bioengineers behaved this way, the world would be a much riskier place to live. This example also illustrates that emerging technologies are complex, with commensurately complex ethical considerations.

The categorical imperative is actually a professional metric. The distinguishing factor of professionalism is trust. Engineers, physicians and other professionals enter into a social contract that matches professional authority with accountability to the public. The vendor's credo, *caveat emptor,* does not hold for professionals. Rather, their credo is *credat emptor*: "The client can trust us!" As the first canon of the National Society of Professional Engineers (NSPE) states, engineers must "hold paramount the safety, health and welfare of the public." Technical professions and research institutes must enhance their members' technical competence to address newly emergent and seemingly intractable problems, such as security, health, and safety. Simultaneously, the profession must instill an ethos that addresses these problems in a just way. The two premises must be integrated into any technological advancement.


However, as technologies become more complicated, the potential impacts become more obscure and increasingly difficult to predict. The "sword of Damocles" comprises all potential, but unintended, consequences. This means that new decision support tools must be employed to consider risks and costs over the life of the technology and beyond.

One metric of the ethics of a technology is whether it poses or could pose *unacceptable risk*. Risk is the likelihood of negative outcomes. Too much risk means the new technology has failed society. Societal expectations of acceptable risk are mandated by technological standards and specifications, such as health codes and regulations, zoning and building codes and regulations, principles of professional engineering and medical practice, criteria in design guidebooks, and standards promulgated by international bodies (e.g. ISO, the International Organization for Standardization) and national standard-setting bodies (e.g. ASTM, the American Society for Testing and Materials).

Specific technologies are additionally sanctioned by organizations. For example, genetic modification of microbes, i.e. medical biotechnology, is sanctioned by institutes of the biomedical sciences, such as the American Medical Association, and regulatory agencies, whereas food safety and environmental agencies, such as the U.S. Food and Drug Administration, the U.S. Department of Agriculture and the U.S. Environmental Protection Agency, and their respective state counterpart agencies, are responsible for new biotechnologies in their respective areas. Since emerging biotechnologies carry a reasonable potential for intentional misuse, a number of their research and operational practices are regulated and overseen by homeland security and threat reduction agencies, especially those related to microbes that have been or could be used as biological agents in warfare and terrorism.

Of course, two terms used in the previous paragraphs beg for clarity. What is *unacceptable*  and what is *reasonable*? And, who decides where to draw the line between unacceptable and acceptable and between unreasonable and reasonable? It is not ethical to expose people to unacceptable risk. The acceptability of a technology has both inherent and use aspects. For example, radiation emitted from a device is inherently hazardous. However, if no one comes near the device it may present little risk, notwithstanding its inherent properties. Thus, the use of the device drives its acceptability. As such, acceptability is value-laden. A device that destroys a tumor may be well worth the exposure to its inherently hazardous properties.

Likewise, deciding whether a risk of a technology is reasonable also depends on its expected uses. One benchmark of technological acceptability is that a risk be "as low as reasonably practicable" (ALARP), a concept coined by the United Kingdom Health and Safety Commission (2011). The Commission is responsible for health and safety regulation in Great Britain; the Health and Safety Executive and local government are the enforcing authorities who work in support of the Commission. The range of possibilities fostered by this standard can be envisioned as three domains (see Fig. 5). In the uppermost domain, the risk is clearly unacceptable. The bottom indicates generally acceptable risk. However, the size of these domains varies considerably with perspective. There is seldom consensus and almost never unanimity.
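The three ALARP domains can be sketched as a simple classifier. This is an illustrative sketch only: the `upper` and `lower` thresholds below are invented placeholders, and where those lines actually sit is precisely the value-laden, contested judgment discussed above:

```python
def alarp_region(risk, upper=1e-3, lower=1e-6):
    """Classify a risk (here, a hypothetical annual probability of harm)
    into the three ALARP domains of Fig. 5. The `upper` and `lower`
    thresholds are illustrative placeholders, not regulatory values."""
    if risk >= upper:
        # Uppermost domain: intolerable regardless of benefit.
        return "unacceptable"
    if risk <= lower:
        # Bottom domain: broadly acceptable without further reduction.
        return "broadly acceptable"
    # Middle domain: tolerable only if reduced as low as reasonably
    # practicable, weighing the benefit against the cost of reduction.
    return "ALARP: manage and justify"

for r in (1e-2, 1e-4, 1e-7):
    print(r, alarp_region(r))
```

Only the middle region calls for the case-by-case weighing of benefit against harm described below; the outer two regions are settled, at least for those who accept the thresholds.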

Risks in the ALARP region need to be managed scientifically and ethically to produce an acceptable outcome. Thus, the utility of a particular application of a new biotechnology, for example, can be based upon the greatest good that the use of the technology will engender, compared to the potential harm it may cause. For example, consider a genetically




The distinguishing characteristic of a professional is what the Ancients referred to as *ethike aretai.* Roughly translated from Greek, it means "skill of character" (Pence 2003). This is a hybrid of both technical competence and ethics; not separate, but integrated throughout the life cycle of an innovation. Thus, the ethical technologist is not only competent and skillful within a technical discipline, but is equally trustworthy and honorable.
