Of all modern technologies, the one most closely associated in the public's mind with potential catastrophe is nuclear power. If a substantial portion of the radioactive material contained in a reactor were released to the atmosphere, the results could be disastrous. No one on either side of the nuclear debate denies this; what each argues about is the likelihood of a serious accident and the magnitude of its effects.
The potential hazard from nuclear power is very different from that posed by use of toxic chemicals. With the exception of manufacturing accidents or extraordinarily careless disposal of chemical wastes, damage from chemicals typically is dispersed and, at worst, would result in a large number of individual illnesses and deaths spread out over space and time. In contrast, a nuclear catastrophe can result from a single, large accident. Because nuclear consequences are so severe, regulators cannot use the same trial-and-error strategies they employ for toxic chemical control. Nor can they rely on advance testing, because the large-scale nature of the hazard makes a definitive, controlled study of a nuclear accident impractical. And short of prohibiting the construction of a nuclear power plant, there is no equivalent to the strategy of screening out particularly risky chemicals.
This chapter examines the strategies developed by regulators of nuclear power in their attempts to cope with this more concentrated type of catastrophe.
In 1947 the Atomic Energy Commission (AEC) established a Reactor Safeguards Committee (RSC) composed of leading atomic scientists from outside the AEC; the first chairman was Edward Teller. The committee's function, as its name implies, was to determine whether the reactors then being planned by the AEC could be built without endangering public safety. As its basic approach to reactor safety, the committee decided to continue the practice established by the Manhattan Project during World War II (that is, the effort to develop the atomic bomb) of keeping reactors isolated from the population as much as possible. Thus, if a serious release of radioactivity did occur, the effects on public safety would be minimized. Each reactor was to be surrounded by two concentric areas. The inner area would be unpopulated and under the complete control of the AEC, and the outer area would be populated by no more than ten thousand people. The size of the two areas depended partly on the power of the reactor: the greater the power, the larger the areas. The size of the outer area also depended on the type of reactor and on the meteorology, hydrology, and seismology of the geographical region.
The first test of this safety plan occurred in 1946-47 and involved a reactor designed to test materials used in more advanced reactors. The materials testing reactor was relatively large for its day, although it is about one-tenth the size of current reactors. The AEC originally planned to construct this reactor at Argonne National Laboratory just outside Chicago, where it would be accessible to scientists at the lab. However, the Reactor Safeguards Committee ruled that the reactor was too large to be built so close to a city. Either the reactor would have to be redesigned and scaled down in power, or it would have to be moved to a less populated site.
The director of the lab, who might have been expected to fight for this project, instead endorsed removal to a remote site as a reasonable policy: "For a nation with the land space of ours and with the financial resources of ours, adopting a
very conservative attitude on safety is not an unnecessary luxury." In fact, he proposed the establishment of a very remote site where all the early versions of reactors could be tested. This proposal was "most enthusiastically" endorsed by the Reactor Safeguards Committee and approved by the AEC in May 1949. The site was in a barren desert section of Idaho about forty miles from Idaho Falls, then a city of twenty thousand.
Most early nuclear reactors were built at the Idaho test station. The only major exception was a reactor then under development by General Electric at its Knolls Atomic Power Lab (KAPL) outside Schenectady, New York. Although this reactor was as powerful as the materials testing reactor, the scientists at KAPL proposed to build it at a site near Knolls, which was about ten miles away from any heavily populated areas. Given the size of the reactor, this proposal caused some concern among the Reactor Safeguards Committee. On the other hand, building the reactor in Idaho might have prevented the Knolls personnel from continuing their reactor research. The committee feared that this "would be disastrous to the leadership of the United States in atomic energy." So the RSC in fall 1947 "concluded unenthusiastically that a location near Schenectady might be acceptable."
In response to the RSC's concerns, plans for the KAPL reactor changed significantly. Scientists at the laboratory developed new ways to ensure that public exposure would be minimized in the event of a serious release of radioactivity. They proposed that the entire reactor facility be enclosed in a gas-tight steel sphere. The sphere would be designed to withstand "a disruptive core explosion from nuclear energy release, followed by sodium-water and air reactions." It would thus contain within the reactor facility "any radioactivity that might be produced in a reactor accident." The AEC accepted this proposal, which thereafter became a major safety component in all civilian nuclear power plant construction. Moreover, the Knolls reactor was still to be built in a relatively unpopulated area; containment was not considered a complete substitute for remote siting.
In its early years the RSC made a number of less crucial
safety decisions. In approving a small reactor for Argonne National Lab, for example, the committee required that the amount of plutonium and radioactive waste generated in the reactor be strictly limited. In evaluating this reactor as well as the one at Knolls, the RSC considered not only the risk of accidents but also the potential for sabotage. In addition, the committee discussed in a preliminary way a variety of other safeguards, including emergency arrangements for cooling a reactor by flooding and other automatic safety devices.
The important point is that by the early 1950s, a general strategy for coping with the potential for catastrophe had emerged. Reactors were to be built on very remote sites or on relatively remote sites with containment. Decision makers believed that this policy would substantially protect the public should a serious reactor accident occur.
At about the same time that this safety strategy was evolving, the first nuclear submarine reactors were being developed. The earliest models were built at remote test sites on land, and the reactors that were actually used in submarines were constructed soon after these land-based versions. Unfortunately, the strategies used in protecting against serious accidents with land-based reactors were not applicable to submarine reactors: "Since the sixty-man submarine crew had no avenue of escape while the ship was at sea and major ports were generally large population centers, remote siting could not be relied upon to acceptably limit the consequences of an accident. Nor could containment be reasonably engineered for a submarine."
This led scientists and engineers to devise an entirely different approach: rather than attempt to contain or isolate the effects of accidents, they attempted to prevent accidents, and they employed a variety of tactics toward this end. While most of these tactics consisted of applying unusually stringent standards to such procedures as operator training, program auditing, and quality control, two of the tactics (designing with wide margins for error and with redundancies) were less common to industrial practices and were devised to reduce the probability of serious nuclear accidents.
The components and systems of most machines (those of a car, for instance) are built to withstand the average or likely set of operating conditions. But submarine reactors were built to withstand "the worst credible set of circumstances, rather than . . . average or probable conditions." Each of the components was constructed of materials that could withstand substantially higher than likely temperatures and pressures, and each of the systems was designed to operate for substantially longer periods of time than necessary.
Not only were the components and systems built to withstand extreme conditions, but redundancies also were included in the design to serve as backups in case systems or components did fail. Each safety-related function of the reactor could be performed by more than one component or system. For example, if one system for injecting the control rods into the core failed, another independent system could be used, or if a primary set of pumps failed to operate, a backup set could be put into operation.
For land-based reactors, then, the early strategy was to isolate and contain the effects of accidents. For sea-based reactors, the strategy was to prevent accidents altogether.
By the late 1950s the AEC required that both prevention and containment strategies be applied to land-based nuclear reactors. The prevention strategy followed more or less the same pattern used for submarine reactors: systems were conservatively designed with wide margins for error and redundancies. For instance, the material that sheathed the reactor fuel had to withstand higher temperatures and more corrosive conditions than were likely, and the pressure vessel (which contained the reactor core) and the coolant pipes were built to withstand much higher than expected pressures and temperatures.
Reactors also were designed so that if any safety-related component or system failed, a back-up component or system would perform the necessary function. Each reactor was required to have two independent off-site sources of electrical power with completely independent transmission lines capable of providing all the power required to run the plant. Further, more than one method had to be provided for injecting control rods into the core, several coolant loops had to be passed through the core (so that if one failed, the others would still be available), and back-up pumps and valves had to be provided.
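The safety logic behind independent redundancy can be illustrated with a back-of-the-envelope probability sketch. The failure rates below are purely hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical failure-on-demand probabilities -- illustrative only,
# not actual reactor figures.
p_primary = 0.01   # chance the primary rod-injection system fails on demand
p_backup = 0.01    # chance the independent backup system fails on demand

# If the two systems are truly independent, the safety function is lost
# only when both fail at once, so the probabilities multiply.
p_function_lost = p_primary * p_backup
print(p_function_lost)  # ~1e-4, a hundredfold improvement over one system
```

The multiplication holds only if the failures really are independent; a common cause, such as the "unanticipated interconnection" the regulators themselves worried about, can make the true probability far higher than the product suggests.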
The AEC also required that reactors be equipped with emergency safety systems; this constituted an additional level of redundancy. Engineers attempted to anticipate malfunctions and sequences of malfunctions that might lead to serious releases of radioactivity, and they then designed emergency systems that would operate when such malfunctions occurred. For example, if one or more coolant pipes ruptured and too little coolant reached the core, the fuel might melt and radioactivity could be released. To counteract this possibility, all reactors were required to be equipped with "emergency core cooling systems," which consisted of alternate sources of water (coolant) that could be sprayed or pumped into the core if the coolant level fell too low.
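The emergency-system logic described above amounts to a monitored threshold: when coolant falls below a setpoint, injection from an alternate source begins. A minimal sketch, with an invented setpoint and unitless levels:

```python
# Invented setpoint -- a schematic of the trigger logic only.
LOW_COOLANT_SETPOINT = 0.6   # fraction of normal coolant inventory

def eccs_demand(coolant_level):
    """Return True when emergency core cooling injection should start."""
    return coolant_level < LOW_COOLANT_SETPOINT

print(eccs_demand(0.9))  # False: normal operation
print(eccs_demand(0.4))  # True: a rupture has drained too much coolant
```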
In spite of these preventive measures, the AEC recognized that serious accidents might still occur, so it also required that measures be taken to protect humans and the environment from the effects of possible nuclear accidents. The early method for accomplishing such protection was to build reactors away from populated areas. By the end of the 1950s, however, the AEC began to modify this approach. Over time, reactor sites that had been initially remote were becoming populated, and there were few remote sites in areas where nuclear power would be most commercially viable. In addition, remote siting involved increasingly expensive power transmission costs. Largely in response to growing pressure from the nuclear industry, the AEC evolved a new policy that shifted the reliance on remote siting to a combination of siting and containment safeguards: the less remote the site, the more extensive the other required safeguards. By the
late 1950s it was required that all reactors be designed with containment. The AEC stipulated that the reactor's containment building be strong enough to withstand "the pressures and temperatures associated with the largest credible energy release" arising from a reactor accident and be almost gas-tight so that only very small amounts of radioactivity could leak into the atmosphere.
To determine whether the containment system proposed for a reactor was adequate, the AEC attempted to determine whether it could withstand the "maximum credible accident." There are many conceivable sequences of events that can lead to a release of fission products. Some, such as failure of the pressure vessel in which the core is located, were considered incredible, and these were eliminated from consideration. Remaining sequences of events were considered credible, and the maximum credible accident, as its name suggests, was the most severe of these events.
For the light water reactors used in the United States, two ways such an accident could occur were envisioned. One was "an inadvertent insertion of reactivity [such as an increase in rate of chain reaction] leading to fuel damage and rupture of the primary coolant line," and a second was "brittle shear [a sudden break] of a primary coolant line with subsequent failure of the emergency cooling system." Once the maximum credible accident had been specified, the AEC calculated the most extreme consequences of this accident. On this basis, the AEC screened applications for nuclear reactor licenses, requiring that the design submitted be sufficient to prevent radiation from reaching the public in the event of such a maximum credible accident.
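The AEC's screening logic can be sketched as a simple filter: discard the sequences deemed incredible, take the most severe sequence that remains, and check the proposed design against it. The sequence names and severity numbers below are invented placeholders, not AEC data:

```python
# Hypothetical accident sequences: (name, credible?, severity index).
# Names and numbers are illustrative only.
sequences = [
    ("pressure-vessel failure",            False, 10),  # ruled incredible
    ("reactivity insertion, line rupture", True,   7),
    ("coolant-line shear, ECCS failure",   True,   8),
    ("small coolant leak",                 True,   2),
]

# Keep only credible sequences, then take the most severe of them.
credible = [s for s in sequences if s[1]]
max_credible = max(credible, key=lambda s: s[2])

def license_ok(containment_capacity):
    """Approve only if containment withstands the maximum credible accident."""
    return containment_capacity >= max_credible[2]

print(max_credible[0])   # the maximum credible accident
print(license_ok(9))     # True: design margin exceeds the MCA
print(license_ok(5))     # False: application would be rejected
```

Note that the whole scheme hinges on the credibility judgments in the second column: a sequence labeled incredible, like pressure-vessel failure, is never tested against the design at all.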
The underlying assumption of the AEC in setting forth its two-pronged safety strategy of the 1950s and early 1960s was that errors in reactor design, construction, and operation would in fact occur. As much as nuclear regulators sought to eliminate error, they never believed that this could be achieved; the uncertainties and complexity associated with nuclear technology are too great. The basic premise of reactor design was to make reactors not free of errors but forgiving of them. Thus, reactors
were built on the assumption that at some point, critical components would fail, temperatures and pressures would rise higher than expected, safety systems and back-up safety systems would fail, and even the emergency safety systems might fail. Reactors were to be built to withstand such circumstances.
The AEC modified its approach to reactor safety in 1966-67 when the size of reactors sharply increased and doubts emerged as to whether containment would withstand a maximum credible accident. The largest reactor in operation in the early 1960s produced two hundred megawatts of electricity, but, beginning in 1963, orders began to be placed for much more powerful reactors. Three reactors ordered in that year were two to three times more powerful than any reactor then in operation. Seven ordered in 1965 were three to five times more powerful, and twenty-one ordered in 1966 were six times more powerful. This increase in reactor power had a crucial impact on the AEC's safety strategy. If the coolant were lost in one of the large reactors and the emergency cooling system failed, a breach of containment and an escape of fission products into the environment might possibly occur. For example, the reactor core might melt into a molten mass, which in turn might melt through the reactor vessel, fall to the floor of the containment building and melt through that as well. (This scenario came to be known as the "China Syndrome.")
Failure of containment was not inevitable in the event of a core melt; even in large reactors, the containment shields were strong enough to withstand many of the possible effects. But as reactor size increased, containment could no longer be fully relied upon to withstand the most serious possible effects.
The AEC responded to this situation by reviewing ways to reinforce containment. One manufacturer proposed a core catcher, a water-cooled, stainless steel device placed below the reactor vessel that presumably would catch the reactor core if
it melted through the reactor vessel. Other possibilities included larger containment vessels, dual or triple containment shields, and systems for venting or igniting accumulated hydrogen, but none of these devices could ensure containment of the worst credible effects of core melts. The behavior of melted reactor cores was unknown. Furthermore, the range of possible consequences of a core melt was sufficiently broad that no single device could cover all the possibilities. For example, a core catcher might help if the core melted through the vessel, but it would be of little help in the event of a dangerous buildup of pressure.
Most observers concluded that no practical system could be devised for guaranteeing containment in the event of a serious core melt in a large reactor. Core catchers and similar devices might reduce the probability that containment would fail, but they could not make the probability low enough for the AEC to continue to rely on containment as the primary defense. The AEC had to modify its strategy.
Therefore, in 1967 the AEC decided to emphasize its prevention strategy. If it could no longer guarantee containment of fission products released by core melts in large reactors, the AEC would attempt to prevent the fission products from being released in the first place. This meant the inclusion of wider margins for error, more redundancies, and enhanced emergency safety systems. The change was one of degree: the larger reactors were to be designed even more conservatively than the smaller ones.
This increase in conservative design is illustrated by changes made in the requirements for emergency core cooling systems. Emergency cooling systems previously were designed to handle only relatively small leaks or breaks in the normal cooling system. In 1966, the capacity of these systems was substantially increased, and the new systems were designed to protect against the largest and most severe possible primary coolant system pipe breaks. In addition, since a large break would be accompanied by a violent release of steam that might hurl missiles of ruptured pipe, measures were taken to protect vulnerable components of the emergency systems.
Redundancies were added to the system as well. Pressurized light water reactors now would have independent systems for emergency cooling. One system was passive and consisted of several very large tanks of water. If one of the large primary cooling pipes were to break, the pressure in the core would decrease below the pressure in the water tanks and the tanks would open and "rapidly discharge a large volume of water . . . into the reactor vessel and core." Emergency cooling also was provided via an injection system for pumping water into the core. Both high- and low-pressure pumps were available for different types of pipe breaks, and each pump had its own backup. Thus, the emergency core cooling system, which itself constituted a second level of redundancy, was composed of two systems, each of which was redundantly designed.
The shift toward greater reliance on prevention did not represent a change in the AEC's underlying approach: the AEC's goal was still to make reactors forgiving of errors. However, the increased emphasis on prevention did make the regulatory process considerably more complicated. As long as containment was considered to be guaranteed, the main issue in the regulatory process was whether the particular containment system and site proposed for a new reactor would withstand the worst credible effects of the worst (or maximum) credible accident. There might be disagreement over the definition of credible accidents and over the maximum amount of their effects, but at least the range of possible issues open to debate was relatively restricted.
The shift in emphasis to prevention opened up a much larger set of debatable issues. In order to prevent radiation releases, regulators had to anticipate not only the worst effects of accidents but also all the credible potential causes. Included in these causes were failures in the coolant system, the electrical system, the control system, and so on; the emergency systems had to prevent these failures from triggering serious accidents. Nuclear power regulators needed to anticipate the variety of reactor conditions that might arise as a result of the many possible failures, the emergency systems' responses to these conditions, and the consequences of those responses.
For example, in order to ensure that the emergency cooling system was capable of cooling the reactor core in the event of a double-ended break in the largest cooling pipe, estimates had to be made of the following conditions, among others:
Distribution of temperatures in the core after such a break;
Effects on the core of a loss of coolant;
Effects of violent releases of steam from the core as coolant is injected into the core;
Possible reactions of the fuel cladding (the metal in which the fuel is sheathed) with water and steam in the core after the loss of coolant;
The rate at which emergency coolant should be injected into the core.
Some of these conditions were virtually impossible to calculate without actual experimental meltdowns; estimating other conditions was time-consuming and subject to a range of professional judgment. Requiring regulators to base safety policies on calculations of these conditions resulted in a more complex and difficult regulatory process. Debates arose about possible causes of serious accidents and about the reliability of safety systems. What if the pressure vessel failed? What if the emergency core cooling system did not flood the core as quickly as anticipated? What if, through some unanticipated interconnection, several supposedly independent safety systems failed simultaneously? What if the operating temperature rose and both the control rods and the pumps that circulate coolant into the reactor failed? What if the turbine failed and pieces of it were hurled off like missiles? What if pipes cracked as a result of stress corrosion?
Such questions could go on endlessly; reactors are so complex that it was always possible to postulate some new combination of events that conceivably could trigger a core melt and a nuclear accident. Since the emphasis had shifted to prevention, prediction of all such combinations of events was critical to reactor safety, and newly suggested combinations of events, while remote, were not something regulators could afford to
rule out. Furthermore, there was no way to dispel lingering doubts about whether all the possible triggering events had been anticipated, whether the capacities of the emergency systems had been estimated accurately, and whether all the ways emergency systems might fail had been examined. Both in practice and in principle, it was impossible to prove that the unexpected would not occur, and so the debates went on.
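Why the questions "could go on endlessly" has a simple combinatorial explanation: with n distinct component failures that might be postulated, the number of possible failure combinations grows as 2^n. A toy enumeration (the component names are invented for illustration):

```python
from itertools import combinations

# Invented component list -- the point is the growth rate, not the names.
components = ["relief valve", "feedwater pump", "diesel generator",
              "control-rod drive", "instrument channel"]

# Count every non-empty combination of simultaneous failures that a
# regulator could, in principle, be asked to consider.
total = 0
for k in range(1, len(components) + 1):
    total += sum(1 for _ in combinations(components, k))

print(total)  # 2**5 - 1 = 31 combinations from just 5 components
```

Doubling the component list to ten raises the count to 1,023, and a real reactor has thousands of safety-relevant components, which is why exhaustive anticipation was impossible in practice.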
This open-ended regulatory process may or may not produce added safety gains, but it has proven extraordinarily costly to the nuclear industry. The Nuclear Regulatory Commission (NRC) has ordered repeated modifications of design requirements, both for reactors under construction and for those already in operation, and these modifications have contributed substantially to sharp increases in the capital costs of reactors. As early as 1974, an Atomic Energy Commission study estimated that "reactors scheduled for service in the early 1970s required about 3.5 man-hours/KWe [kilowatt-electric] to construct, whereas those scheduled for service in the early 1980s would require 8.5 man-hours/KWe." A 1979 study by the Departments of Energy and Labor concluded that this trend would continue, and that by the mid-1980s reactor construction would require between 13 and 18 man-hours/KWe. These increases were due in part to reactor design changes intended to further reduce the probability of accidents.
The nuclear industry and critics of the regulatory process argue that many of these costly design changes do not improve safety. In their view, reactors were already safe enough in the 1970s, considerably safer than other publicly accepted structures such as large dams, chemical plants, and airports in populated areas. Why, the critics ask, should increasingly unlikely potential causes of accidents be taken into account when the probabilities of serious accidents are already minute? Why make reactors more forgiving of errors when they were already forgiving enough? The inevitable answer is that it is impossible to be sure that reactors are forgiving enough. What if all the important causes of accidents have not been anticipated? What if the capacity of an emergency system has been overestimated? What if safety systems assumed to be independent and redundant in fact are not? What if . . . ?
The debates triggered by the emphasis on prevention were brought to a head by the Three Mile Island (TMI) accident. The reactor at Three Mile Island is a pressurized water reactor. As shown in Figure 1, this type of reactor is composed of two loops of circulating water. In the primary loop, water circulates through the reactor core, where it is heated. (It does not boil because it is at very high pressure.) From the core, it is pumped to the steam generator, and from there it passes (via tubes) through cooler, lower-pressure water and causes this water to boil. The water then circulates back to the core, where it is reheated. The lower-pressure water in the steam generator is in the second of the two loops. As it is boiled by the hotter water from the primary loop, it turns into steam, which then is circulated to the turbines (steam runs the turbines, which generate electricity for public use). After passing through the turbines, the steam is condensed into water and pumped back into the steam generator, where the cycle repeats.
The Three Mile Island accident occurred in March 1979 and began when maintenance personnel inadvertently shut off water to the secondary loop. This began a series of normal safety measures:
1. The loop's water pump and the turbine automatically shut down (not an uncommon event);
2. This triggered an automatic shutdown of the chain reaction in the reactor (also an unremarkable event).
Even though the chain reaction ended, the decay of fission products in the reactor core continued to give off heat (as it always does). So, to remove heat from the reactor core,
3. A backup pump went into operation to circulate water in the secondary loop; and
4. A pressure relief valve opened in the primary loop (a standard measure to prevent overpressurization).
Ordinarily, these steps would have taken care of the problem. The primary loop, which was still operating, would have brought water heated by the decay of fission products to the steam generator; the water in the secondary loop, circulated by the back-up pump, would have removed the heat. The reactor would have been restarted after a brief shutdown.
But another error occurred. The pressure relief valve, which was supposed to close automatically, remained open. Even worse, the control room instruments indicated to plant operators that the valve had closed, a fourth error. At this point serious problems began. Since the valve stayed open, pressure in the loop fell and water began to boil away through the open valve. If enough water boiled away, the reactor fuel would become exposed and parts of the fuel assembly would begin to melt and oxidize. The open valve thus created a real threat of damage to the fuel and release of fission products.
In reaction to the loss of coolant and pressure,
5. An emergency cooling system was automatically activated to replace the water that had boiled away and escaped through the valve.
This would have prevented further difficulty, but then a fifth error occurred. Misled by the instruments into thinking that the valve was closed and the reactor pressurized and full of water, the operators turned off the emergency water supply! They thought there was too much water in the reactor and actually began to remove water. All the while, water continued to escape and pressure continued to fall. By now, a considerable amount of steam had accumulated in the primary loop, making the pumps still in operation begin to vibrate, so the operators turned off these pumps. This sixth error further reduced the heat removal capacity of the system (since the circulation of the primary loop had been removing at least some of the heat from the core).
It was not until over two hours after the accident began that the operators finally realized that the valve was in fact open and that there was too little, not too much, water in the core. They then shut the valve and flooded the core with emergency
coolant. But by this time, water covering a portion of the fuel had boiled away. The zirconium alloy (which sheathes the actual fuel) and other materials in the fuel assembly melted and oxidized, becoming quite brittle. Some of the fuel itself appears to have melted, but there is debate on this point. (Uranium oxide, a ceramic, has a very high melting point, so it can withstand a loss of coolant longer than other parts of the fuel assembly.) What is clear, however, is that when the embrittled fuel assembly finally was flooded again, a large segment of the core shattered and a substantial quantity of fission products were released. Most of these, however, were trapped by the containment building, as nuclear designers had planned.
The TMI accident is generally considered the worst mishap in the history of the U.S. nuclear industry. As such, it provided a good test of how forgiving nuclear reactors are. Yet the implications of the accident are ambiguous.
On the one hand, it certainly demonstrated that reactors are forgiving of errors. Maintenance errors touched off the incident, and the stuck pressure relief valve helped turn this occurrence into a major emergency. There were operator errors during the accident: shutting down the emergency cooling system, removing water from the primary loop, and shutting down the pumps in the primary loop. And there was an error in the original design: the instrumentation that led operators to believe the reactor was full of water when it was not. Despite all these errors, emergency systems were still available to prevent more serious consequences.
Moreover, the health consequences of the accident have been judged about as severe as a car accident. In the words of the Kemeny Commission (appointed by President Carter to investigate the accident), the levels of radioactivity released in the accident "will have a negligible effect on the physical health" of the population. Furthermore, according to widely accepted analyses performed by several groups, even if the accident had continued until the core had melted through the reactor vessel and the containment floor (the "China Syn-
drome"), the consequences still would not have been severe. The Kemeny Commission reported that: "even if a meltdown had occurred, there is a high probability that the containment building and the hard rock on which the TMI-2 containment building is built would have been able to prevent the escape of a large amount of radioactivity." That is to say, containment probably would not have failed.
The analyses also show that the TMI accident probably would not have continued long enough for a complete meltdown. The stuck valve through which water was escaping remained undetected for over two hours, but it would have taken "dozens of hours" for the fuel to melt through the reactor vessel and containment floor. Throughout that time, "restoration of water . . . by any means, with or without closure of the [stuck] relief valve would [have] stop[ped] progress of the damage or melting." Water could have been restored by a variety of independent mechanisms provided by the conservative reactor design. These included a high pressure water injection system, a core flooding system, a containment spray, and containment coolers. And despite the fact that the operators failed to assess the problem for over two hours, they would have had "many more 'observables' available to them had the accident progressed further." That is, if the accident had continued, there would have been many indications that the core was melting and that it therefore should be flooded. So the accident appears to have demonstrated that reactors are extremely forgiving of errors, as the AEC and NRC had planned.
On the other hand, the accident called into serious question the safety strategy based on prevention. By the 1970s core melts and containment of their effects no longer were even considered in the reactor licensing process because nuclear regulators assumed that meltdowns would be prevented. Yet at Three Mile Island, part of the core seems to have melted and fission products were released. This supported the argument made by those skeptical of the prevention strategy. To prevent errors from triggering core melts, all credible sequences of events leading to melts must be anticipated. Yet the sequence of the TMI accident had not been foreseen. How
were regulators to know in the future whether all possible accident sequences had been anticipated?
The TMI accident inspired a series of reviews of the nuclear regulatory system by Congress, the Nuclear Regulatory Commission, the nuclear industry, and the state of Pennsylvania, as well as the independent, President-appointed Kemeny Commission. The reviews recommended a variety of changes and emphasized two potential causes of core melts that had heretofore received insufficient attention. One of these causes was operator error; prior to the TMI accident, regulators had directed most attention to design errors rather than operator errors. Minor malfunctions, such as stuck valves, also had been underemphasized. Attention had been focused on relatively improbable, severe malfunctions, such as a double-ended break in the largest cooling pipe. Regulators had assumed that if reactors were designed to prevent serious malfunctions, this would also prevent less serious malfunctions.
Because the TMI accident demonstrated that even minor malfunctions and operator errors could lead to core melts, it was necessary that the potential for such errors be examined more carefully in future reactor design and regulation. These recommendations were consistent with the pre-TMI strategy of prevention and thus generated little opposition.
A second set of recommendations was more controversial. It included proposals for a return to remote siting, systems for filtering and venting gas that might build up in the containment during a core melt, core catchers, emergency evacuation procedures, and distribution to the nearby population of potassium iodide pills to counteract the effects of radioactive iodine that might be released into the environment by a core melt. Each of these measures was intended to contain or otherwise mitigate the effects of core melts, a step that many postaccident reviews deemed necessary because a portion of the core was damaged in the TMI accident.
The NRC did not expect these new containment measures to perform the same function as the containment strategy of the late 1950s and early 1960s. The earlier approach was based on the expectation that containment would withstand the effects of serious core melts; no such assumption was made after
TMI. For large contemporary reactors, there still could be no guarantee that containment could withstand a core melt. At best, the proposed measures would reduce the probability that radiation released in a core melt would escape into the environment. Vented containment and core catchers would reduce the probability that containment would fail; remote siting and emergency planning would reduce the probability that large numbers of people would be exposed to escaped radiation if containment failed; potassium iodide pills would reduce the probability that people exposed to radiation would develop cancer of the thyroid.
The fact that these recommended containment measures could not entirely prevent serious public exposures to radiation made them vulnerable to the same problems that plagued earlier prevention efforts. If the probability that core melts would lead to public radiation exposure was to be reduced, when would it be sufficiently reduced? Reactor systems are so complex that there might always be additional design changes that would further reduce the probability of a core melt. In the event of a meltdown, would the preparation of emergency evacuation plans be sufficient? Or would it be necessary to have emergency evacuation plans as well as remote siting plus systems for venting containment? Or would all of these be necessary plus core catchers, larger containments, and smaller reactors? Furthermore, serious doubt existed among some observers about whether any of these changes were really necessary. The changes were being proposed in the wake of an accident that had demonstrated the forgiving nature of reactors. Perhaps reactors were already safe enough. If not, when would they be?
The nuclear community has been struggling with this issue for over a decade, and it seems no closer to a resolution now than it was originally. As will be discussed in detail in the concluding chapter, there was a significant effort in the early 1980s to establish a safety goal, a point at which reactors would be deemed safe enough. Such a goal was established, but it did not resolve the difficulties. Ironically, this effort fell victim to the same uncertainties that created the need for a safety goal in the first place: even if agreement could be
reached on an acceptable level of risk, how would regulators know that a given reactor had achieved that level? How could they be sure that they had correctly anticipated all the significant possibilities leading to an accident?
At the outset of the nuclear era, the combination of high uncertainty and potential for catastrophe created a serious dilemma for nuclear regulators. They were confronted with a technology so complex that errors in reactor design, construction, and operation were virtually certain to occur. At the same time, they were confronted with the possibility that such errors could lead to intolerable consequences. The regulators overcame this dilemma by requiring that reactors be forgiving of errors. If errors in design, construction, and operation were inevitable, then the best that could be done was to require that reactors be designed so as to make it unlikely that the errors would actually lead to the intolerable consequences.
Unfortunately, in overcoming this first dilemma, regulators created a new and perhaps more intractable dilemma: how forgiving of errors should reactors be? The technology is so complex that there is always one more method possible for reducing the likelihood that an error will trigger serious consequences and one more sequence of events that conceivably could lead to these consequences. So how safe is safe enough?
The pointedness of this question for nuclear power helps in understanding the dilemma of toxic chemicals regulation. In both cases, regulators were confronted by uncertainty combined with a potential for catastrophe. In both cases, regulators confronted the dilemma and devised deliberate strategies for coping with it. The strategies were quite different, which in itself is a measure of the intelligence of the process: regulators were adapting their strategies to the differences in the problems they faced. And in both cases, in an attempt to overcome the first dilemma, regulators discovered a new dilemma: when had their efforts gone far enough?
In setting priorities for toxic chemicals regulation, why focus on the top fifty chemicals? Why not sixty, or one hundred? When conducting premanufacture screening for new chemicals, how toxic must a chemical be to necessitate restrictions or outright prohibition? Now that new techniques can detect chemical traces at the parts per billion or trillion level and questionable chemicals can be detected throughout the environment and in many consumer products, how much of a toxic substance is acceptable? This is precisely the same problem faced by the nuclear regulators as they realize that containment can no longer be guaranteed. What level of risk is acceptable? And how can regulators be sure that that level has in fact been achieved?
We explore this problem in the concluding chapter. For the moment, suffice it to say that no satisfactory strategy has yet emerged to address this new dilemma.