**2. Definition of terms**

We should first define our terms. The phrase 'risk management' will refer primarily to the risk to life and limb of human beings, present and future, and to the life and health of the planet. Secondarily, it will refer to the taking of risk with possessions, e.g., the wealth of human beings. For the most part, the discussion will refer to the primary sense of the term and only occasionally, in special contexts, to the secondary one. It should be pointed out, at the outset, that the phrase 'risk management' is already biased in favor of risk. The phrase implies that the taking of risk is necessary and/or advantageous and that, if it carries any negative consequences, these consequences can be mitigated or eliminated by proper management. This bias toward risk is assumed in the acceptance of the use of the phrase 'risk management'. A major purpose of this investigation is to call into question this built-in bias toward risk.


In the strictest sense, the proper reference for the inquiry should be 'risky decision taking' or 'risky decision making' rather than 'risk management'. Such a refinement of the subject of inquiry would make the concept of risk either neutral or questionable, since the 'taking' of risk implies negative consequences whilst 'risk management' carries with it the hidden meaning that risk is already being protected against or absorbed by effective management policies. The phrase 'risk management' grants risk a protective coating such that the consequences of the risk are camouflaged. As a result, the following discussion will focus primarily on the entire question of 'taking risk' or making 'risky decisions' in the first place and only secondarily on the 'management of risk'.1 It is important to make this distinction because the entire question of the ethics of risk must be considered in the first place.2 *Where, to any degree whatsoever, 'risk' is already acceptable, the question of the ethics of risk becomes diluted in value. When the taking of risk is itself under question, the ethics of risk takes on greater meaning.* 

In the specific case from which we will draw most of our information and discussion, the launching of the U.S. space shuttle *Challenger*, the action to be taken was considered to be an 'acceptable risk'. The question of 'acceptable risk' was applied to the action to be taken, not to the decision to take the action. Had the notion of 'acceptable risk' been applied to the decision to launch, some ethical responsibility for the *decision* would have been present. As it turned out, the ethical responsibility for the decision to launch was conspicuous by its very absence.

The ethical issues involved in the decision to launch were further nullified by the choice of terminology utilized to classify the level of risk involved in the malfunction of the part that eventually did malfunction (the O-ring). The label utilized was Criticality 1, the defined consequence of its occurrence being 'loss of mission and life'. When one of the four managers who overrode the unanimous decision of 14 engineers and managers not to launch was asked at the Presidential Commission hearings whether the phrase 'loss of mission and life' had a negative connotation, the answer given by the manager, Larry Mulloy, was that such a description had no negative connotation and simply meant that you have a single point failure with no back-up and the failure of that single system is catastrophic.3

<sup>1</sup> For the sake of convenience, in general we will employ the term 'risk taking' as a short-hand for 'risky decision making' which will always stand for 'risking the consequences to the principal risk takers as a result of decision making'. The phrase 'principal risk takers' refers to those whose lives will be affected by the occurrence of the risk, the primary actors who will be directly involved in taking the risk in the risky situation; e.g., in the case of the *Challenger*, the astronauts and civilians aboard rather than the 'decision makers', the four middle managers.

<sup>2</sup> It is gratifying to learn that the U.S. government, in its educational training courses for FEMA, the Federal Emergency Management Agency, is currently teaching the terms and definitions for the proper understanding of risk management that the author of this chapter originated and that are at the basis of the ideas in this chapter.

<sup>3</sup> Richard C. Cook, *Challenger Revealed, An Insider's Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age*, New York: Avalon, 2006, p. 243. Without Richard Cook's early articles in the *New York Times*, it is possible that the entire *Challenger* investigation would not have occurred. It was a source of inspiration to be in communication with Richard Cook at the time of my writing the *Challenger* chapters in my book, *Saving Human Lives: Lessons in Management Ethics*.

The reaction of Richard Cook, the budget analyst at the time, shows how the language of choice removes ethical considerations from one's consciousness:

How extraordinary: possible "loss of mission and life" doesn't have a negative connotation.4

Cook goes on to say that there was no negative connotation because it had been deemed an acceptable risk and, moreover, there had never been any criteria for defining an "acceptable risk". This was a result of using a failure modes and effects analysis without any quantitative risk measures. In other words, the odds of a catastrophe occurring to the *Challenger* were conjured out of thin air, not calculated, because the assessment was a subjective engineering judgment that was not based upon any previous performance data.

Three conclusions present themselves. Firstly, there must exist considerable ethical blindness when loss of life has no negative connotation. Secondly, there must be considerable ethical blindness when loss of life is somehow considered an acceptable risk. Thirdly, there must be considerable epistemological blindness when a notion of "acceptable risk" can be in use without being based on any objective measurements. Cook, one of the few sources on the *Challenger*, if not the only one, to go into detail on this point, cites one study, conducted in 1984, which concluded that the chance of a Solid Rocket Booster explosion was one in thirty-five launches.5 Nevertheless, the Marshall managers spoke of "acceptable risk". If you "lost" one astronaut, that was "data" in the risk equation.

In 1977, NASA commissioned a group called the Wiggins Group to study the possibility of flight failures; by examining data for all previous space launches, a likely failure rate of one in fifty-seven was derived. According to Cook, NASA complained that many of the launch vehicles the Wiggins group included were not similar enough to the *Challenger* shuttle to be part of the database. So, Wiggins changed the probable failure rate to one in 100. Another study, conducted by the Air Force, placed the likely failure rate for the shuttle in a similar range to the Wiggins analysis. According to Cook, none of these studies were publicized, and most of the newspaper reporters who covered NASA most likely had never heard of them. When NASA was forced to arrive at an official number, its chief engineer came up with the infamous and arbitrary estimate of one in 100,000. As Cook put it, 'At a rate of twenty-four launches per year, this meant that NASA expected the shuttle to fail catastrophically only once every 4,167 years.'6 In the language of risk introduced below in this chapter, the possible incidence was therefore negligible. By presenting, not calculating, the *incidence* of risk as virtually non-existent, it was possible to immunize oneself against the realities of the *consequences* of the risk. One could then make a decision to risk other people's lives, because statistical probability had eliminated the problematic dimension of the risk factor, the consequences, from the equation. When a figure such as one in 100,000 is used, one might assume that this is a calculated risk, since it is put in the mathematical language of percentages and statistics. But this was not a 'calculated risk' at all; it was fantasy parading as mathematics. A figure picked out of thin air had granted the decision makers moral immunity.
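To make the gap between these figures concrete, here is a minimal sketch of the arithmetic they imply, assuming the failure rates and the twenty-four-launches-per-year figure quoted above; the labels, variable names, and output format are illustrative rather than taken from the chapter.

```python
# Illustrative arithmetic only: compares the failure-rate estimates quoted
# above with the mean interval between catastrophic failures they imply.

estimates = {
    "1984 Solid Rocket Booster study": 1 / 35,      # one in thirty-five launches
    "Wiggins Group (original)": 1 / 57,             # one in fifty-seven launches
    "Wiggins Group (revised for NASA)": 1 / 100,    # one in 100 launches
    "NASA chief engineer (official)": 1 / 100_000,  # one in 100,000 launches
}

launches_per_year = 24  # the rate assumed in Cook's remark

for source, p_failure in estimates.items():
    flights_per_failure = 1 / p_failure               # expected launches between failures
    years_per_failure = flights_per_failure / launches_per_year
    print(f"{source:35s} one failure per {flights_per_failure:>9,.0f} flights "
          f"(~ every {years_per_failure:>8,.1f} years)")

# The official figure reproduces the 'once every 4,167 years' claim
# (100,000 / 24 ≈ 4,167), whereas the data-derived estimates imply a
# catastrophic failure within roughly 1.5 to 4 years of regular flights.
```

The point of the comparison is not the precision of any single number but the orders-of-magnitude distance between the estimates derived from data and the figure that was announced.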


<sup>4</sup> *Ibid*.

<sup>5</sup> *Ibid*., p. 356.

<sup>6</sup> *Ibid*., pp. 126-7.


**4. The ethics of risk**

We assume, as an ethical premise, that there should never be a risk taken to potential life and limb unless it is absolutely necessary. A good example of this would be the Hippocratic Oath taken by physicians, which begins with the axiom '*Primum non Nocere*', 'Do no Harm'. One takes risk, as with surgery, only when it is necessary to promote or safeguard health. In other words, risk is justifiable only when it is absolutely necessary *in the service of life*.

What about cases of advantage rather than absolute necessity? For example, let us again consider the case of commercial airplane travel. There is certainly risk involved, and it would seem to be the case that the concept of risk management would come into play. Upon closer examination, however, when one considers the safety precautions that are taken through mechanical inspection, etc., one realizes that it is 'risk taking' that is modified, that is, one is reducing the risk involved rather than "managing" an existing risk. *One minimizes the risk involved: one is not managing risk; one is minimizing risk.* 

**5. General unknown risk versus specific and foreknown risk** 

There is a confusion that is frequently made between the general and unknown risk that is operative in the universe and any specific risk that is known in advance to exist. For example, whenever one gets out of one's bed in the morning, one may trip, fall, crack one's skull and have a concussion. This is the general and unknown risk that is operative in the universe. We should not construe risk in these terms, as this kind of risk exists, for the most part, outside of human control and intervention.

… to surrender. We also know that, in the general, unknown risk category, a plane may explode in mid-air. This risk taking is minimized by careful and regular inspection of the mechanical parts of the airplane and a replacement of said parts and said plane on a needful basis. Other aspects of this risk are minimized by guarding against a drunken pilot, hijacking by suicidal terrorists, etc. In such cases, risk is minimized. It is more accurate to consider that the risk in these cases is *minimized* rather than *managed*, because its possibility of occurrence is reduced rather than the occurrence of its risk being managed. The latter understanding is how the term 'risk management' might well be construed. In fact, it is difficult to understand what the term 'management' means in the case of 'risk management'.

Risks that are foreknown in advance to the principals involved in the risk-taking are the only kind of risks that our discussion should consider. A classic case in point of the contrast that exists between general, unknown risk and specifically foreknown risk is the case of the faulty and dangerous O-rings that were known in advance to exist (by managers and engineers, though not by the principal risk takers) prior to the fatal flight of the space shuttle *Challenger*. The classic case of the *Challenger* disaster can be used to illustrate the fallacies of the concept of 'risk management' and the need to replace this concept with the more accurate, new concepts of 'risk taking' or 'risky decision making'. While other cases could also be chosen, the availability of overwhelming, documented evidence in the case of the
