This study shows that certain design changes in nuclear reactors would cost as much as $3 billion per life saved, whereas additional highway safety could be achieved for as little as $140,000 per life. Other analyses have produced somewhat different estimates, but the vast discrepancy in the funds spent to avert deaths from different threats is clear.
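Taken at face value, the two estimates just cited imply a ratio worth making explicit. The following back-of-the-envelope sketch in Python uses only the figures quoted above; it is an illustration of the discrepancy, not a precise cost-effectiveness analysis:

    # Rough comparison using the two per-life cost estimates cited above.
    # The figures are this study's estimates, not precise or current values.
    cost_per_life_nuclear = 3_000_000_000  # $ per life saved, reactor design changes
    cost_per_life_highway = 140_000        # $ per life saved, highway safety measures

    ratio = cost_per_life_nuclear / cost_per_life_highway
    print(f"Each life saved by the reactor changes costs as much as "
          f"{ratio:,.0f} lives saved on the highways.")  # about 21,429

In other words, at the cited prices, the money spent to avert one death through reactor design changes could avert on the order of twenty thousand highway deaths.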
Focusing political attention on the overall costs of averting risks would help balance such gross discrepancies. One course would be to establish a government agency or congressional committee with authority to set priorities for risk reduction. A more realistic option would make total expenditures subject to a unified congressional authorization procedure. Currently, competing proposals for risk abatement do not confront one another. New safety procedures required by the NRC for electric utilities that use nuclear power in no way impinge on the amount spent for highway safety, nor does either of these expenditures influence expenditures for testing and regulation of chemicals. The result is that safety proposals are not compared with each other, so neither government nor the media nor the public is forced to think about comparative risks.
Factual uncertainties prevent precise comparisons among risks, but precise comparisons often are not needed. There are such gross discrepancies in our approaches to different risks that much can be done to reduce these risks without having to
confront the intractable uncertainties. Compared to attacking egregious risks that have gone relatively unattended, making precise comparisons among risks that already are regulated is fine tuning. Resolving the "How safe?" debate with precise comparisons might be satisfying, but it is not as important as attacking the egregious risks. Unfortunately, such fine tuning preoccupies professional risk assessors, regulators, and political activists, and it wastes time and energy.
A second strategic approach would take advantage of risk-reduction opportunities that circumvent troublesome risks. The greenhouse issue provides a good illustration. As discussed in chapter 6, virtually all attention devoted to this problem has focused on carbon dioxide emissions from combustion of fossil fuels. Yet fossil fuels are considered fundamental to contemporary life, and the costs of significant reductions in their use could be severe; so there is widespread reluctance to take any action without a much better understanding of the risks. The net effect is that we wait and debate whether the risk is real enough to warrant action. Until the uncertainties are reduced, there is no rational basis for resolving the debate.
But there may be an alternative. Carbon dioxide is not the only contributor to the greenhouse problem. Other gases, such as nitrous oxide, are also major factors. It is conceivable that emissions of these other gases might be easier to control and might thereby offer an opportunity to at least delay or reduce the magnitude of the greenhouse effect. The 1983 NAS and EPA studies make note of this possibility but do not analyze it in any detail. By early 1986 little sustained attention had been paid to the policy options potentially available for reducing non-CO2 greenhouse gases.
Similarly, discussions of the options for combating the greenhouse effect have focused on costly restrictions on the use of high carbon fuels, but it may be possible to achieve at least some of the benefits of such restrictions through a much less costly combination of partial solutions. This combination
of solutions might include partial reforestation, plus research on crop strains better adapted to dry climates, plus partial restrictions on only the highest carbon fuels.
Another means of circumventing uncertainties about a risk is to develop a method of offsetting the risk. Quite inadvertently, the ozone threat eased when it was found that low-flying airplanes emit chemicals that help produce ozone. Could a similar approach be pursued deliberately for some technological risks? In the greenhouse case, deliberate injection of sulphur dioxide or dust into the atmosphere might result in temporary cooling similar to that achieved naturally by volcanic dust. Deliberate intervention on such a scale might pose more environmental danger than the original problem, but careful analysis of this possibility surely is warranted.
The case of nuclear power provides another possible approach to circumventing risks and uncertainties about risks. Interest is growing in the notion of inherently (or passively) safe reactors: reactors for which there is no credible event or sequence of events that could lead to a meltdown. The reactor concepts now receiving the most attention include small high-temperature gas-cooled reactors and the PIUS reactor (a light water reactor with the core immersed in a pool of borated water). Preliminary analyses indicate that these reactors are effectively catastrophe-proof. Even if the control systems and cooling systems fail, the reactors will still shut themselves down.
Skeptics argue that the concept of inherent safety probably cannot be translated into practice, and that such reactors in any case would not be economical. But in the history of commercial power reactors there has never before been a deliberate attempt to build an inherently safe reactor, and some analysts believe that these new reactors can provide, if not "walk away" safety, at least substantially reduced risks. If this is true, these new reactor concepts provide the opportunity to short circuit much of the "How safe?" debate for nuclear power plants. If it can be shown that such reactors are resistant to core melts in all credible accident scenarios, then many of the open-ended and contentious safety arguments could be avoided. While we do not know whether inherently safe reactors will prove feasible, and while there are other controversial
aspects of the nuclear fuel cycle (particularly waste disposal), nonetheless, the possibility that reactors could approach inherent safety is well worth considering. Resistance to this concept apparently is due more to organizational inertia than to sound technical arguments. Thus, although the concept of inherent safety has existed for thirty years, society has been subjected to a bitter and expensive political battle that a more strategic approach to this topic might have circumvented.
A very different approach to transcending factual uncertainties is to compromise. When policy makers are at an impasse over how safe a technology is or should be, it may at times be possible to reach a solution that does not depend on the resolution of the uncertainties. This strategy is already used, but it is not employed consciously enough or often enough. Because each opportunity for creative compromise necessarily is unique, there can be no standard operating procedure. However, examples of the advantages of compromise abound.
For example, the Natural Resources Defense Council, EPA, and affected industries have reached several judicially mediated agreements that have accomplished most of the limited progress made to date against toxic water pollutants. Another example is the negotiated approach to testing priority chemicals that EPA adopted with the chemical industry in 1980. The possibility of creative compromise was not envisioned by the framers of the Toxic Substances Control Act, but neither was it prohibited. Numerous protracted analysis-based hearings and judicial challenges thereby have been avoided, and judging from the limited results available to date, testing appears to be proceeding fairly rapidly and satisfactorily.
Had compromises and tradeoffs been the basis for setting standards throughout the toxic substances field, many more standards could have been established than actually have been. Then they could have been modified as obvious shortcomings were recognized. Of course, compromise agreements can be very unsatisfying to parties on either side of an issue who believe they know the truth about the risks of a given endeavor. But as all parties observe past controversies in which society underreacted or overreacted to possible risks, there is a fair prospect that participants in future controversies gradually will become more realistic.
A third option for strengthening the catastrophe-aversion system is to create research and development programs focused explicitly on reducing key factual uncertainties. This seems an obvious approach, yet it has not been pursued systematically in any major area of technological risk except for recombinant DNA. Of course, regulatory agencies have research and development (R&D) programs that investigate safety issues, but priorities ordinarily are not well defined and research tends to be ill matched to actual regulatory debates.
The greenhouse case again provides a good illustration, particularly since the uncertainties associated with it are so widely recognized as being at the heart of the debate about whether or not action is required. The NAS report could not have been more explicit about the importance of the uncertainties to the greenhouse debate:
Given the extent and character of the uncertainty in each segment of the argument (emissions, concentrations, climatic effects, environmental and societal impacts), a balanced program of research, both basic and applied, is called for, with appropriate attention to more significant uncertainties and potentially more serious problems.
Yet as clearly as the report recognizes the importance of the factual uncertainties, it fails to develop a strategy for dealing with them. It merely cites a long list of uncertainties that require attention. As we discussed in chapter 6, the National Research Council listed over one hundred recommendations, ranging from economic and energy simulation models for predicting long-term CO2 emissions, to modeling and data collection on cloudiness, to the effects of climate on agricultural pests.
Certainly answers to all of these questions would be interesting and perhaps useful; but, just as certainly, answers to some of them would be more important than answers to others. What are the truly critical uncertainties? What kinds of
information would make the biggest differences in deciding whether or not to take action? As R&D proceeds and information is gained, are there key warning signals for which we should watch? What would be necessary to convince us that we should not wait any longer? Policy makers and policy analysts need a strategy for selectively and intelligently identifying, tracking, and reducing key uncertainties.
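One way to make such a strategy concrete is a rough value-of-information ranking. The sketch below is ours, not the NAS's: the uncertainty names and the numerical scores are hypothetical, and the scoring rule (decision impact weighted by how much R&D could plausibly reduce the uncertainty) is only one heuristic for deciding where research money would matter most:

    # Hypothetical sketch of ranking uncertainties to set an R&D agenda.
    # Names and scores are illustrative assumptions, not drawn from the report.
    uncertainties = [
        # (name, impact on the act-or-wait decision 0-1, reducibility by R&D 0-1)
        ("long-term CO2 emission projections", 0.9, 0.4),
        ("cloud feedback in climate models",   0.8, 0.5),
        ("regional precipitation changes",     0.7, 0.3),
        ("effects of climate on crop pests",   0.3, 0.6),
    ]

    # Crude priority score: research matters most where an answer would
    # actually change the decision AND research can actually deliver one.
    ranked = sorted(uncertainties, key=lambda u: u[1] * u[2], reverse=True)
    for name, impact, reducibility in ranked:
        print(f"{name:38s} priority = {impact * reducibility:.2f}")

Even so crude a scheme forces the question the NAS list leaves open: which answers would change the decision, and which would merely add detail.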
A similar problem arises in the case of nuclear power. In principle, nuclear regulators should systematically identify the central remaining safety uncertainties: the issues that will continue to lead to new regulatory requirements. Regulators should then devise a deliberate R&D agenda to address such uncertainties. A prime example is uncertainty about the behavior of the reactor core once it begins to melt. Clearly, this lies at the heart of the entire nuclear debate, since the major threat to the public results from core melts. Yet, as we discussed earlier, virtually no research was performed on core melts in the 1960s and 1970s.
Information and research resulting from the experience of Three Mile Island now have called into question some of the basic assumptions about core melts. For example, if the TMI core had melted entirely, according to the Kemeny Commission it probably would have solidified on the containment floor. Even the nuclear industry had assumed that a melted core would have gone through the floor. Moreover, it appears that there were a variety of ways in which the core melt could have been stopped. Prior to the accident, the common assumption was that core melts could not be stopped once underway. Also overestimated, according to some recent studies, is the amount of radioactive material predicted to escape in a serious reactor accident: prior assumptions may have been ten to one thousand times too pessimistic.
If such revised ideas about reactor accidents were to be widely accepted, they would have a substantial effect on the perceived risks of reactor accidents. But all such analyses are subject to dispute. To the extent feasible, therefore, it clearly makes sense to invest in research and development that will narrow the range of credible dispute without waiting for the equivalent of a TMI accident. As with the greenhouse effect,
what is needed is a systematic review of prevailing uncertainties and an R&D program devised to strategically address them. The uncertainties that make the biggest difference must be identified, those that can be significantly reduced by R&D must be selected, and an R&D program focused on these uncertainties must then be undertaken. In other words, a much better job can be done of using analysis in support of strategy.
As noted previously, learning from error has been an important component of the strategies deployed against risky technologies. But learning from error could be better used as a focused strategy for reducing uncertainties about risk. As such, it would constitute a fourth strategic approach for improving the efficiency and effectiveness of the catastrophe-aversion system.
The nuclear power case again offers a good illustration of the need to prepare actively for learning from error. Suppose that a design flaw is discovered in a reactor built ten years ago for a California utility company. Ideally, the flaw would be reported to the Nuclear Regulatory Commission. The NRC would then devise a correction, identify all other reactors with similar design flaws, and order all of them to institute the correction. In actual operation, the process is far more complicated and the outcome far less assured.
To begin with, in any given year the NRC receives thousands of reports about minor reactor mishaps and flaws. The agency must have a method of sifting this mass of information and identifying the problems that are truly significant. This is by no means a straightforward task, as exemplified by the flaw that triggered the Three Mile Island accident. A similar problem had been identified at the Davis-Besse reactor several years earlier, but the information that was sent to the NRC apparently was obscured by the mass of other data received by the agency. Several studies of the TMI accident noted this unfortunate oversight, and concluded that the NRC and the nuclear industry lacked an adequate mechanism for monitoring feedback. In response, the nuclear industry established an
institute for the express purpose of collecting, analyzing, and disseminating information about reactor incidents. This action represents a significant advance in nuclear decision makers' ability to learn from experience.
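The sifting problem just described is, at bottom, one of triage and pattern matching. A minimal sketch follows, with invented report fields and thresholds (the NRC's and the industry institute's actual screening procedures are more elaborate); it shows how cross-plant recurrence checks might flag a precursor like the Davis-Besse valve problem even when each individual report looks minor:

    # Minimal sketch of triaging reactor incident reports.
    # Report fields, severity scale, and thresholds are hypothetical.
    reports = [
        {"plant": "Davis-Besse", "component": "relief valve", "severity": 2},
        {"plant": "TMI-2",       "component": "relief valve", "severity": 2},
        {"plant": "Plant C",     "component": "feedwater pump", "severity": 1},
        # ... thousands more reports arrive in a real year
    ]

    # Flag events that are individually severe.
    severe = [r for r in reports if r["severity"] >= 3]

    # Flag components reported at two or more different plants: repeated
    # minor events at separate sites can signal a generic design flaw
    # that no single report would reveal on its own.
    plants_by_component = {}
    for r in reports:
        plants_by_component.setdefault(r["component"], set()).add(r["plant"])
    generic_flaws = [c for c, plants in plants_by_component.items()
                     if len(plants) >= 2]

    print("Individually severe events:", severe)
    print("Possible generic flaws:", generic_flaws)

The design choice the industry's institute embodies is visible even here: significance emerges only when reports from separate utilities are pooled and compared, which no single utility could do alone.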
Even with a well-structured feedback mechanism, there are still other obstacles to learning from experience. One such obstacle arises from the contentious nature of current U.S. regulatory environments, which can actually create disincentives to learning. Given the adversarial nature of the nuclear regulatory environment, many in the nuclear industry believe that they will only hurt themselves if they propose safety improvements in reactor designs. They fear that opponents of nuclear power will use such safety proposals to argue that existing reactors are not safe enough, and that regulators will then force the industry to make the change on existing reactors, not just on new ones. This would add another round of costly retrofits.
Another obstacle to learning from experience can arise from the nature of the industry. For example, the nuclear industry comprises several vendors who over the years have sold several different generations of two different types of reactors to several dozen unrelated utility companies. Furthermore, even reactors of the same generation have been partially custom-designed to better suit the particular sites for which they were intended. The resulting nonuniformity of reactor design is a significant barrier to learning from experience, because lessons learned with one reactor are not readily applicable to others.
The design flaw uncovered at our hypothetical California utility's ten-year-old reactor probably can be generalized to the few reactors of the same generation (unless the flaw was associated with some site-specific variation of the basic design). It is less likely to apply to reactors built by the same vendor but of different generations, much less likely to apply to reactors of the same general type made by other vendors, and extremely unlikely to apply to other reactor types. Furthermore, lessons gained from experience in maintaining and operating reactors are also hard to generalize. Since reactors are owned by independent utilities, the experience of one utility in operating its reactor is not easily communicated to other utilities. In many respects, therefore, each utility must go through an independent learning cycle.
There also are significant barriers to learning about most toxic chemicals. The large number of such chemicals, the vast variety of uses and sites, and the esoteric nature of the feedback make the task of monitoring and learning from experience extraordinarily difficult. Yet the EPA's tight budget and the limited resources of major environmental groups mean that routine monitoring will not get the attention given to other, more pressing needs. Designing a good system for such monitoring is itself a major research task, but simply obtaining reliable information on production volumes, uses, and exposures would be a place to start.
The point, then, is that active preparation is required to promote learning from experience. The institutional arrangements in the regulatory system must be devised from the outset with a deliberate concern for facilitating learning from error. In the nuclear power case, the ideal might be a single reactor vendor, selling a single, standardized type of reactor to a single customer. The French nuclear system comes close to this pattern.
In summary, there are at least four promising avenues for applying risk-reduction strategies more effectively. The first strategy is to make an overall comparison of risks and to focus on those that clearly are disproportionate. The second is to transcend or circumvent risks and uncertainties by employing creative compromise, making technical corrections, and paying attention to easier opportunities for risk reduction. The third strategy is to identify key uncertainties and focus research on them. The fourth is to prepare from the outset to learn from error; partly this requires design of appropriate institutions, but partly it is an attitudinal matter of embracing error as an opportunity to learn. Finally, implicit throughout this study is a fifth avenue for improvement: by better understanding the repertoire of strategies available for regulating risky technologies, those who want to reduce technological risks should be able to take aim at their task more consciously, more systematically, and therefore more efficiently.
Of these, the first strategy probably deserves most attention. Attacking egregious risks offers simultaneously an opportunity to improve safety and to improve cost effectiveness. As an example, consider the 1984 Bhopal, India, chemical plant disaster. The accident occurred when:
A poorly trained maintenance worker let a small amount of water into a chemical storage tank while cleaning a piece of equipment;
A supervisor delayed action for approximately one hour after a leak was reported because he did not think it significant and wanted to wait until after a tea break;
Apparently as an economy measure, the cooling unit for the storage tank had been turned off, which allowed a dangerous chemical reaction to occur much more quickly;
Although gauges indicated a dangerous pressure buildup, they were ignored because "the equipment frequently malfunctioned";
When the tank burst and the chemical was released, a water spray designed to help neutralize the chemical could not do so because the pumps were too small for the task;
The safety equipment that should have burned off the dangerous gas was out of service for repair and anyway was designed to accommodate only small leaks;
The spare tank into which the methyl isocyanate (MIC) was to be pumped in the event of an accident was full, contrary to Union Carbide requirements;
Workers ran away from the plant in panic instead of transporting nearby residents in the buses parked on the lot for evacuation purposes;
The tanks were larger than Union Carbide regulations specified, hence they held more of the dangerous chemical than anticipated;
The tanks were 75 percent filled, even though Union Carbide regulations specified 50 percent as the desirable level, so that pressure in the tank built more quickly and the overall magnitude of the accident was greater (a rough calculation of this effect follows the list).
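The last item lends itself to a rough calculation. Assuming, as a simplification, that the runaway reaction generated gas at a constant rate and that the vapor behaved ideally, halving the headspace roughly doubles how fast pressure builds:

    # Rough illustration of why overfilling the tank mattered.
    # Constant gas generation and ideal-gas behavior are simplifying
    # assumptions; the actual MIC reaction was far more complex.
    tank_volume = 1.0          # normalized tank volume
    gas_rate = 1.0             # normalized gas production per hour

    for fill in (0.50, 0.75):  # Union Carbide's specified level vs. actual
        headspace = tank_volume * (1 - fill)
        # At fixed temperature, PV = nRT implies the rate of pressure rise
        # scales inversely with the free volume above the liquid.
        relative_rate = gas_rate / headspace
        print(f"fill {fill:.0%}: headspace {headspace:.2f}, "
              f"pressure builds {relative_rate:.1f}x the normalized rate")

    # At 75 percent fill the headspace is half that at 50 percent fill,
    # so pressure climbs roughly twice as fast toward the burst point.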
The length of this list of errors is reminiscent of the Three Mile Island accident. The difference between the two incidents is that TMI had catastrophe-aversion systems that prevented serious health effects, while at least two thousand died in Bhopal and nearly two hundred thousand were injured. Even though the U.S. chemical industry is largely self-regulated, most domestic plants employ relatively sophisticated safety tactics that use many of the strategies of the catastrophe-aversion system. Still, questions remain about how effectively these strategies have been implemented. For example, a 1985 chemical plant accident in Institute, West Virginia, while minor in its effects, revealed a startling series of "failures in management, operations, and equipment."
The Bhopal and Institute incidents suggest that, relative to other risks, safety issues in chemical manufacturing deserve more governmental attention than they previously have received. In addition to whatever changes are warranted at U.S. chemical plants, special attention should be paid to the process of managing risk at many overseas plants owned by U.S. firms. If the practices at the Bhopal plant were typical, safety strategies abroad are haphazard. While the Bhopal incident has led to a fundamental review of safety procedures in chemical plants worldwide, it should hardly have required a catastrophe to reveal such a vast category of hazard. This oversight demonstrates that some entire categories of risk may not yet be taken into account by the catastrophe-aversion system.
The catastrophe-aversion system likewise was not applied, until recently, to hazardous waste in the United States. State and federal laws made no special provision for toxic waste prior to the 1970s; there were no requirements for initial precautions, or for conservatism in the amounts of waste that were generated. Systematic testing for underground contamination was not required, and waste sites were not monitored for potential problems. It is a tribute to the resilience of the
ecosystem that after-the-fact cleanup now in progress has a good chance of keeping damage from past dumping below catastrophic levels. The next step is to find ways of limiting the generation of new wastes.
What does all this add up to? In our view, society's standard operating procedure should be as follows:
First, apply each of the catastrophe-aversion strategies in as many areas of risk as possible;
After this has been accomplished, proceed with more detailed inquiry, debate, and action on particular risks.
To pursue detailed debates on a risk for which a catastrophe-aversion system already is operative, continuing to protect against smaller and smaller components of that risk, is likely to be a misallocation of resources until the full range of potential catastrophes from civilian technologies has been guarded against. The "How safe?" questions that have become so much the focus of concern are matters of fine tuning; they may be important in the long run, but they are relatively minor compared to the major risks that still remain unaddressed.
At the outset of this volume we quoted a highly respected social critic, Lewis Mumford, who claimed in 1970 that "The professional bodies that should have been monitoring our technology . . . have been criminally negligent in anticipating or even reporting what has actually been taking place." Mumford also said that technological society is "a purely mechanical system whose processes can neither be retarded nor redirected nor halted, that has no internal mechanism for warning of defects or correcting them." French sociologist Jacques Ellul likewise asserted that the technological
system does not have one of the characteristics generally regarded as essential for a system: feedback. . . . [Therefore] the technological system does not tend to modify itself when it develops nuisances or obstructions. . . . [H]ence it causes the increase of irrationalities.
Reflecting on different experiences several decades earlier, Albert Schweitzer thought he perceived that "Man has lost the capacity to foresee and forestall. He will end by destroying the earth."
Although one of us began this investigation extremely pessimistic and the other was hardly an optimist, we conclude that Mumford, Ellul, Schweitzer, and many others have underestimated the resilience both of society and of the ecosystem. We found a sensible set of tactics for protecting against the potentially catastrophic consequences of errors. We found a complex and increasingly sophisticated process for monitoring and reporting potential errors. And we found that a fair amount of remedial action was taken on the basis of such monitoring (though not always the right kind of action or enough action, in our judgment).
Certainly not everyone would consider averting catastrophe to be a very great accomplishment. Most citizens no doubt believe that an affluent technological society ought to aim for a much greater degree of safety than just averting catastrophes. Many industry executives and engineers as well as taxpayers and consumers also no doubt believe that sufficient safety could be achieved at a lower cost. We agree with both. But wanting risk regulation to be more efficient or more effective is very different from being caught up in an irrational system that is leading to catastrophic destruction. We are glad and somewhat surprised to be able to come down on the optimistic side of that distinction.
Finally, what are the implications of the analysis in this volume for environmentally conscious business executives, scientists, journalists, activists, and public officials? Is it a signal for such individuals to relax their efforts? We do not intend that interpretation. The actions taken by concerned groups and individuals are an important component of the catastrophe-aversion system described in these pages. To relax the vigilance of those who monitor errors and seek their correction would be to change the system we have described. Quick reaction, sometimes even overreaction, is a key ingredient in that part of regulating risky technologies that relies on trial and error. So to interpret these results as justifying a reduction of efforts would be a gross misreading of our message.
Instead, we must redirect some of our concern and attention. Environmental groups should examine whether they could contribute more to overall safety by focusing greater attention on egregious risks that have not been brought under the umbrella of the catastrophe-aversion system instead of focusing primarily on risks that already are partially protected against. The Union of Concerned Scientists, for example, devotes extended attention to analyses of nuclear plant safety but has contributed almost nothing on the dangers of coal combustion, international standards for chemical plants, or toxic waste generation, all egregious risks that have not been taken into account by catastrophe-aversion strategies. Regardless of whether contemporary nuclear reactors are safe enough, there is no question that they have been intensively subjected to the restraints of the catastrophe-aversion system. We doubt that much more safety will be produced by further debate of the sort that paralyzed nuclear policy making during the 1970s and 1980s. In general, we believe it is time for a more strategic allocation of the (always limited) resources available for risk reduction.
The main message of this volume, however, has been that the United States has done much better at averting health and safety catastrophes than most people realize, considering the vast scope and magnitude of the threats posed by the inventiveness of science and industry in the twentieth century. Careful examination of the strategies evolved to cope with threats from toxic chemicals, nuclear power, recombinant DNA, ozone depletion, and the greenhouse effect suggests that we have a reasonably reliable system for discovering and analyzing potential catastrophes. And, to date, enough preventive actions have been taken to avoid the worst consequences. How much further improvement will be achieved depends largely on whether those groups and individuals concerned with health and safety can manage to win the political battles necessary to extend and refine the strategies now being used. Because we have a long way to go in the overall process of learning to manage technology wisely, recognizing and appreciating the strengths of our catastrophe-aversion system may give us the inspiration to envision the next steps.