**3. Predicting benefits and risk**

It comes as little surprise that inventors and innovators are better prepared and more willing to predict the benefits of their ideas and nascent projects than the concomitant risks. However, such bias is little comfort when mistakes, miscues and misdeeds are uncovered. As evidence, many of the case studies used in introductory engineering ethics courses have an element of selective bias toward the predicted benefits of an innovation.

The inventor or sponsor of a new medical device is likely to be very optimistic about the benefits, while the possible negative outcomes are harder to foresee. Better credit card security devices could encroach on privacy. A genetically modified organism may do its job quite well in making medicine or cleaning up wastes, yet carry risks, such as adverse effects on biodiversity. What these three seemingly diverse examples have in common is that the benefits are often more obvious and more immediate than the risks, which may lie years or decades in the future.

Of course, hindsight is often 20/20 and is always easier than foresight. Predicting an emerging technology's risks requires balance: the prediction must not be so cautious that it stifles innovation and imposes large opportunity costs. Conversely, it must not be so optimistic, or so rife with oversimplifications and assumptions, that the risks are mischaracterized or missed entirely.

Another common element of the case studies mentioned is that the risks were not fully transparent to, or were even ignored by, decision makers (often by people with more power in the decision-making process than the engineers, or by engineers who had "forgotten" some of the ethical canons of the profession). Sometimes, the only reason the unethical decision making comes to light is a memo or note from a designer that implicates higher-level decision makers.

Applying the philosophical tool of *reductio ad absurdum*, do we blame the Wright brothers for the misuse of an aircraft or drone? Do we blame Louis Pasteur for the use of anthrax in bioterrorism? Of course not. Somewhere along the way, however, the potential misuse of a technology must be properly considered. In the rapidly changing world of genetics, systems biology, nanotechnology, systems medicine and information technology, we do not have the luxury of waiting a few decades to observe the downsides of emerging technologies.
