The distinguishing characteristic of a professional is what the Ancients referred to as *ethike aretai*. Roughly translated from the Greek, it means "skill of character" (Pence 2003). This is a hybrid of technical competence and ethics; not separate, but integrated throughout the life cycle of an innovation. Thus, the ethical technologist is not only competent and skillful within a technical discipline, but is equally trustworthy and honorable.

**3. Predicting benefits and risk**

It comes as little surprise that inventors and innovators are better prepared and more willing to predict the benefits of their ideas and nascent projects than the concomitant risks. However, such bias is little comfort when mistakes, miscues and misdeeds are uncovered. As evidence, many of the case studies used in introductory engineering ethics courses have an element of selective bias toward the predicted benefits of an innovation.

The inventor or sponsor of a new medical device is likely to be very optimistic about its benefits, but the possible negative outcomes are more obscure. Better credit card security devices could tread upon privacy. A genetically modified organism may do its job quite well in making medicine or cleaning up wastes, but may carry risks, such as adverse effects on biodiversity. What these three seemingly diverse examples have in common is that the benefits are often more obvious and more immediate than the risks, which may be years or decades in the future.

Of course, hindsight is often 20/20 and is always easier than foresight. Predicting an emerging technology's risks requires a balance: the prediction must not be so overly cautious that it stifles innovation and introduces large opportunity costs, nor so optimistic, or so rife with oversimplifications and assumptions, that the risks are mischaracterized or completely missed.

Another common element of these case studies is that the risks were not made transparent, or were even ignored, by decision makers (often by people with more power in the decision making process than the engineers, or by engineers who had "forgotten" some of the ethical canons of the profession). Sometimes, the only reason the unethical decision making comes to light is a memo or note from a designer that implicates the decision makers at the higher level.

Applying the philosophical tool of *reductio ad absurdum*, do we blame the Wright brothers for the misuse of an aircraft or drone? Do we blame Louis Pasteur for the use of anthrax in bioterrorism? Of course not. Somewhere along the way, however, the possible misuse of a technology must be properly considered. In the rapidly changing world of genetics, systems biology, nanotechnology, systems medicine and information technology, we do not have the luxury of waiting a few decades to observe the downsides of emerging technologies.

**3.1 Risk: The ethical yardstick**

Ethical decision making for pending technologies combines technical and ethical factors. It makes use of multiplex optimization or benchmarking, where only certain outcomes are acceptable. A technically acceptable outcome may be ethically unacceptable, and an ethically acceptable outcome may be technically unacceptable. The tools needed to evaluate the benefits and risks of emerging technologies share aspects of most decision support tools.

However, as technologies become more complicated, their potential impacts become more obscure and increasingly difficult to predict. The "sword of Damocles" comprises all potential, but unintended, consequences. This means that new decision support tools must be employed to consider risks and costs over the life of the technology and beyond.

One metric of the ethics of a technology is whether it poses, or could pose, *unacceptable risk*. Risk is the likelihood of negative outcomes. Too much risk means the new technology has failed society. Societal expectations of acceptable risk are codified in technological standards and specifications, such as health codes and regulations, zoning and building codes and regulations, principles of professional engineering and medical practice, criteria in design guidebooks, and standards promulgated by international bodies (e.g. ISO, the International Organization for Standardization) and national standard-setting bodies (e.g. ASTM, the American Society for Testing and Materials).

Specific technologies are additionally sanctioned by organizations. For example, genetic modification of microbes, i.e. medical biotechnology, is sanctioned by institutes of biomedical science, such as the American Medical Association, and by regulatory agencies; food safety and environmental agencies, such as the U.S. Food and Drug Administration, the U.S. Department of Agriculture and the U.S. Environmental Protection Agency, and their respective state counterpart agencies, are responsible for new biotechnologies in their respective areas. Since emerging biotechnologies carry a reasonable potential for intentional misuse, a number of their research and operational practices are regulated and overseen by homeland security and threat reduction agencies, especially those related to microbes that have been or could be used as biological agents in warfare and terrorism.

Of course, two terms used in the previous paragraphs beg for clarity. What is *unacceptable* and what is *reasonable*? And who decides where to draw the line between unacceptable and acceptable, and between unreasonable and reasonable? It is not ethical to expose people to unacceptable risk. The acceptability of a technology has both inherent and use aspects. For example, radiation emitted from a device is inherently hazardous. However, if no one comes near the device, it may present little risk, notwithstanding its inherent properties. Thus, the use of the device drives its acceptability. As such, acceptability is value-laden. A device that destroys a tumor may be well worth the exposure to its inherently hazardous properties.

Likewise, deciding whether a risk of a technology is reasonable also depends on its expected uses. One benchmark of technological acceptability is that a risk be "as low as reasonably practicable" (ALARP), a concept coined by the United Kingdom Health and Safety Commission (2011). The Commission is responsible for health and safety regulation in Great Britain; the Health and Safety Executive and local government are the enforcing authorities who work in support of the Commission. The range of possibilities fostered by this standard can be envisioned as three domains (see Fig. 5). In the uppermost domain, the risk is clearly unacceptable; the bottom domain indicates generally acceptable risk. However, the size of these domains varies considerably depending on perspective. There is seldom consensus and almost never unanimity.
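
To make the three domains concrete, here is a minimal sketch, in Python, of how a single risk estimate might be sorted into them. The two boundary values are purely illustrative assumptions; as noted above, where those lines are drawn is value-laden and rarely commands consensus.

```python
# A minimal sketch of the three risk-tolerance domains (see Fig. 5), assuming
# risk is expressed as an annual probability of a serious harm. The boundary
# values below are illustrative assumptions, not regulatory limits; in
# practice they are negotiated and value-laden.

INTOLERABLE_BOUND = 1e-3   # assumed: risk at or above this is rejected outright
ACCEPTABLE_BOUND = 1e-6    # assumed: risk at or below this is broadly acceptable

def risk_domain(risk: float) -> str:
    """Classify a risk estimate into one of the three domains of Fig. 5."""
    if risk >= INTOLERABLE_BOUND:
        return "intolerable: not justifiable except in extraordinary circumstances"
    if risk <= ACCEPTABLE_BOUND:
        return "broadly acceptable: no detailed ALARP demonstration needed"
    return "ALARP region: tolerable only if further reduction is impracticable or grossly disproportionate in cost"

print(risk_domain(5e-4))  # lands in the ALARP region, where active management is required
```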

Risks in the ALARP region need to be managed scientifically and ethically to produce an acceptable outcome. Thus, the utility of a particular application of a new biotechnology, for example, can be based upon the greatest good that the use of the technology will engender, compared to the potential harm it may cause. For example, consider a genetically engineered bacterium that breaks down a highly toxic contaminant that has seeped into the groundwater more efficiently than other available techniques (e.g. pumping out the groundwater and treating it aboveground using air stripping). If the only basis for success were cleaning up the site, this would be fairly straightforward. That is, if goodness were based solely on this utility, the project is acceptable. However, such single-variable assessments are uncommon and can lead to erroneous predictions of outcome. For example, the engineer must evaluate whether the use of the biotechnology can introduce side effects, such as the production of harmful new substances, or whether the genetically engineered organisms could change the diversity and condition of neighboring microbial populations.

Fig. 5. Three regions of risk tolerance. *Source:* United Kingdom Health and Safety Commission (1998). The original figure shows three stacked regions: **INTOLERABLE RISK** (risk cannot be justified except in extraordinary circumstances); **RISK AS LOW AS REASONABLY POSSIBLE** (tolerable only if risk reduction is impracticable or if its cost is grossly disproportionate to the improvement gained; tolerable if the cost of reduction would exceed the improvement gained); and **BROADLY ACCEPTABLE RISK** (no need for detailed working to demonstrate ALARP), shading into **NEGLIGIBLE RISK** at the bottom.

Therefore, ALARP depends on a defensible margin of safety that is both protective and reasonable. Hence, reaching ALARP necessitates qualitative and/or quantitative measures of the amount of risk reduced and the costs incurred by the design decisions. The ALARP principle assumes that it is possible to compare marginal improvements in safety (marginal risk decreases) with the marginal costs of the increases in reliability (UK Health and Safety Commission 2011).
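
The following sketch illustrates that marginal comparison. The monetised value per unit of risk averted and the disproportion factor are invented for illustration, not regulatory values; the point is only that ALARP presumes such quantities can be estimated and weighed.

```python
# A sketch of an ALARP-style "gross disproportion" test: a safeguard is
# required unless its cost is grossly disproportionate to the improvement
# gained. Both constants below are illustrative assumptions.

VALUE_PER_UNIT_RISK = 2_000_000.0  # assumed monetised value of one unit of risk averted
DISPROPORTION_FACTOR = 3.0         # assumed threshold for "grossly disproportionate"

def safeguard_required(cost: float, risk_reduction: float) -> bool:
    """True unless the safeguard's cost grossly outweighs the improvement gained."""
    improvement = risk_reduction * VALUE_PER_UNIT_RISK
    return cost <= DISPROPORTION_FACTOR * improvement

# A $1.5M upgrade that removes 0.4 units of expected harm passes the test;
# a $10M upgrade delivering the same reduction does not.
print(safeguard_required(1_500_000, 0.4))   # True
print(safeguard_required(10_000_000, 0.4))  # False
```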

To ascertain possible risks from emerging technologies, the first step is to identify the hazard (a potential threat) and then to develop a scenario of events that could take place to unleash the potential threat and lead to an effect. To assess the importance of a given scenario, the severity of the effect and the likelihood that it will occur in that scenario are estimated. This combination of the hazard and exposure particular to that scenario constitutes the risk.

The relationship between the severity and probability of a risk follows a general equation (Doblhoff-Dier 1999):

$$R = f(S, P) \tag{1}$$

where risk (*R*) is a function (*f*) of the severity (*S*) and the probability (*P*) of harm. The risk equation can be simplified to a product of severity and probability:

$$R = S \times P \tag{2}$$

The traditional health risk assessment, for example, begins with the identification of a hazard, which comprises a summary of an agent's physicochemical properties, routes and patterns of exposure, and a review of toxic effects (National Academy of Sciences 2002).
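
Equation (2) can be put to work in a few lines of code. In this minimal sketch the scenarios, severity scores and probabilities are invented for illustration; even so, the ranking shows why severity alone is a poor guide, since a frequent moderate harm can outrank a rare severe one.

```python
# A minimal sketch of Equation (2): for each hazard scenario, risk is the
# product of severity and probability. All scenario names, severity scores
# (arbitrary 0-10 scale) and probabilities are assumptions for illustration.

scenarios = {
    "trace byproduct reaches drinking water": (2.0, 0.30),  # (severity, probability)
    "engineered microbe escapes containment": (8.0, 0.02),
    "device failure exposes an operator":     (5.0, 0.05),
}

# Compute R = S * P for each scenario and rank from highest to lowest risk.
ranked = sorted(
    ((name, s * p) for name, (s, p) in scenarios.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, risk in ranked:
    print(f"{name}: R = {risk:.2f}")
```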

The risks associated with emerging technologies are doubly uncertain, since the hazards are difficult to predict and the likely exposure can be variable and highly uncertain. Analogies to the risk of the new technology can seldom be directly extrapolated from existing technologies, *and* the emerging technology often operates at scales much larger or smaller than better documented technologies. For example, if researchers are engineering materials at scales below 100 nanometers (i.e. nanotechnology), even the physical behavior is unknown. Since risk is a function of hazard and the exposure to that hazard, reliable assessment of that risk depends on sound physical characterization of the hazard. However, if even the physics is not well understood, due to the scale and complexity of the research, the expected hazards to living things are even less well understood.
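
A simple interval calculation illustrates how the two uncertainties compound. The bounds below are assumptions chosen only to show how quickly the ranges multiply when neither the hazard nor the exposure is well characterized.

```python
# A sketch of "doubly uncertain" risk: when hazard and exposure are each known
# only to within wide bounds, the bounds on their product multiply. The
# interval widths below are illustrative assumptions for an emerging technology.

def interval_product(a, b):
    """Multiply two (low, high) intervals of non-negative quantities."""
    return (a[0] * b[0], a[1] * b[1])

hazard = (0.001, 1.0)    # potency uncertain across three orders of magnitude
exposure = (0.01, 10.0)  # exposure uncertain across three orders of magnitude

low, high = interval_product(hazard, exposure)
print(f"risk spans {low:.0e} to {high:.0e}")  # six orders of magnitude wide
```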

Indeed, the ethical uncertainty of emerging technologies is propagated in time and space. For example, many research institutions have numerous *nano-scale* projects (within a range of a few angstroms). Nascent areas of research include ways to link protein engineering with cellular and tissue biomedical engineering applications (e.g. drug delivery and new devices); ultra-dense computer memory; nonlinear dynamics and the mechanisms governing emergent phenomena in complex systems; and state-of-the-art nano-scale sensors (including photonic ones). Complicating the potential societal risks, much of this research employs biological materials and self-assembling devices to design and build some strikingly different kinds of devices. One of the worst-case scenarios has to do with the replication of these "nanomachines." Advancing the state of the science to improve the quality of life (e.g. treating cancer, Parkinson's disease and Alzheimer's disease, improving life expectancies, or cleaning up contaminated hazardous wastes) can introduce different risks (Vallero 2007).

The uncertain yet looming threat of global climate change can be attributed in part to technological and industrial progress. Emergent technologies can help to assuage these problems by using alternative sources of energy, such as wind and solar, to reduce global demand for fossil fuels. However, these technologies can have side effects of their own, including the low-probability but highly consequential outcomes of genetic engineering, e.g. genetically modified organisms (GMOs) used to produce food. GMOs may well help with world food and energy needs, but they are not a panacea.

The renowned physicist Martin Rees (2003) has voiced an extreme perspective related to the apprehension about nanotechnology, particularly its current trend toward producing "nanomachines." Biological systems, at the subcellular and molecular levels, could very efficiently produce proteins, as they already do for their own purposes. By tweaking some genetic material at a scale of a few angstroms, parts of the cell (e.g. the ribosome) that synthesize molecules could start producing myriad molecules designed by scientists, such as pharmaceuticals and nanoprocessors for computing. Rees is concerned that such assemblers could start self-replicating (as they always have), but without any "shut-off." Some have called this the "gray goo" scenario, i.e. the creation of an "extinction technology" arising from the cell's unchecked ability to replicate itself exponentially if part of its design is to be completely "omnivorous," using all matter as food. No other "life" on earth would exist if this "doomsday" scenario were to occur.

Though extreme and (hopefully) unlikely, this scenario calls attention to the problem that ethics usually follows technological advancement. All of the events that lead to even this extreme outcome are individually possible. Most life systems survive within a fairly narrow range of conditions, and slight modifications can be devastating. So, emerging technologies call for even more vigilance and foresight. Engineers and scientists are expected to push the envelopes of knowledge; we are rewarded for our eagerness and boldness. The Nobel Prize, for example, is not given to the chemist or physicist who has merely calculated important scientific phenomena aptly, with no new paradigms. It would be rare indeed for an engineering society to bestow an award on an engineer who, for an entire career, used only proven technologies to design and build structures. This begins with our general approach to contemporary scientific research. Technologists are often rugged individualists in a quest to add new knowledge. For example, aspirants seeking Ph.D.s must endeavor to add knowledge to their specific scientific discipline, and scientific journals are unlikely to publish articles that do not at least contain some modicum of originality and newly found information.

Innovation is rewarded. Unfortunately, there is not much natural incentive for innovators to stop what they are doing to "think about" possible ethical dilemmas propagated by their discoveries. However, the engineering profession is beginning to come to grips with this issue; for example, in emergent "macroethical" areas like nanotechnology, neurotechnology, and even sustainable design approaches (National Academy of Sciences 2004).

Thus, those engaged in emerging technologies are expected to push the envelopes of possible applications and simultaneously to investigate likely scenarios, from the very beneficial to the worst-case ("doomsday") outcomes. This link between fundamental work and outcomes becomes increasingly crucial as such research reaches the marketplace relatively quickly and cannot be confined to the "safety" and rigor of the laboratory and highly controlled scale-ups.

Technological development thrusts the innovator into uncomfortable venues. Rarely is there a simple answer to the questions "How healthy is healthy enough?" and "How protected is protected enough?" Managing risks consists of balancing among alternatives. Usually, no single way to prevent potential problems is available. Whether a risk is acceptable is determined by a process of making decisions and implementing actions that flow from these decisions to reduce the adverse outcomes or, at least, to lower the chance that negative consequences will occur (The Royal Society 1992).

Technologists can expect that whatever risk remains after their technologies reach the users, those potentially affected will not necessarily be satisfied with that risk. People want less risk, all other things being equal. Derby and Keeney (1981) have stated that "acceptable risk is the risk associated with the best of the available alternatives, not with the best of the alternatives which we would hope to have available." Calculating the risks associated with these alternatives is inherently constrained by three conditions (Morgan 1981):

1. The actual values of all important variables cannot be known completely and, thus, cannot be projected into the future with complete certainty.
2. The physical and biological sciences of the processes leading to the risk can never be fully understood, so the physical, chemical and biological algorithms written into predictive models will propagate errors in the model.
3. Risk prediction using models depends on probabilistic and highly complex processes that make it infeasible to predict many outcomes.

The decision to proceed with most engineering designs or projects is based upon some sort of "risk-reward" paradigm and should reflect a balance between benefits and costs (UK Department of Environment 1984). When comparing benefits to costs, the values assigned to each are inexact. Given the uncertainty, even a benefit/cost ratio that appears to weigh heavily toward benefits, i.e. well above 1, may not provide an ample margin of safety given the risks involved.
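
A brief numerical sketch of that caveat, with all distributions and spreads invented for illustration: even when the nominal benefit/cost ratio is near 3, wide uncertainty in both quantities leaves a sizeable probability that realised costs exceed realised benefits.

```python
# A sketch of benefit/cost comparison under uncertainty: sample benefits and
# costs from assumed lognormal distributions and estimate the probability that
# the realised ratio falls below 1. All parameters are illustrative assumptions.

import random

random.seed(1)

N = 100_000
shortfalls = 0
for _ in range(N):
    benefit = random.lognormvariate(1.0, 0.8)  # median benefit ~ e^1.0, i.e. ~2.7
    cost = random.lognormvariate(0.0, 0.8)     # median cost ~ e^0.0, i.e. 1.0
    if benefit / cost < 1.0:
        shortfalls += 1

# The nominal (median) benefit/cost ratio is ~2.7, yet roughly one draw in
# five still comes out below break-even under these assumed spreads.
print(f"P(benefit/cost < 1) = {shortfalls / N:.1%}")
```
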
**4. Ethical constructs**

For those involved in technologies, there are two general paths to ethical decisions, i.e. duty and outcome. Duty is at the heart of Immanuel Kant's (1785) "categorical imperative":

*Act only according to that maxim by which you can at the same time will that it should become a universal law.*

The categorical imperative is at the heart of duty ethics (so-called "deontology"), invoking the question as to whether one's action (or inaction) would make for a better world if all others in that same situation were to act in the same way. Thus, the technology itself can be ethically neutral, whereas the individual action's virtue or vice is seen in a comprehensive manner. The unknowns surrounding emerging technologies may cause one to add safeguards or even to abandon a technology or a particular use of the technology. The obligation of the technologist is to consider the effects of universalizing one's new technology from an all-inclusive perspective, considering all the potential good and all the potential bad.

Outcome-based ethics (so-called "teleology") can be encapsulated in John Stuart Mill's (1863) utilitarian axiom of the "greatest good for the greatest number of people." Even the most extreme forms of outcome-based ethics are moderated. For example, Mill added a "harm principle," which requires that no one be harmed in the pursuit of a noble outcome. That is, even though an emerging technology is expected to lead to benefits for the majority, it may still be unethical if it causes undue harm to even one person. John Rawls, who can be