Love Canal led to widespread concern over improper disposal of toxic
substances, and the 1984 disaster at Bhopal, India, spotlighted the risks of chemical
manufacturing plants. But manufacture and disposal may actually be easier to regulate than
the daily use of chemicals by millions of people throughout the economy. This chapter
examines how the U.S. government, scientists, environmentalists, and industries have
worked (and failed to work) to avert disaster from the careless use of toxic chemicals.
Regulation of chemicals offers an excellent test of society's ability to
avert potential catastrophes. Beginning with the industrial revolution and increasing
sharply in the twentieth century, technological societies began to introduce into the
ecosystem chemicals with unknown consequences for natural systems and human health. The
quantities of chemicals introduced are staggering: U.S. production of synthetic organic
chemicals has escalated from virtually zero in 1918 to more than 228 billion pounds
annually. Reliance on inorganic chemicals such as asbestos also has increased
significantly. In total, there are more than sixty thousand chemicals now in use.
Until recently, the primary approach to regulation has been a
trial-and-error process. Few restrictions were placed on the production and use of
chemicals. Judgments about the purposes to which a chemical should be put, the manner and
frequency of its application, and its potency were left largely
unregulated until actual experience provided evidence of serious risk.
But a more deliberate approach to protecting against potential hazards now has emerged.
The two most important strategies are to test new chemicals before they come on the market
and to set priorities for regulating toxic substances already in use.
Learning by Trial and Error
The essence of learning from error is to try something, observe the
outcome, and try something new to correct any undesirable results. The regulatory history
of pesticides is one of learning from error; the central theme in this history is the
emergence of feedback about errors and society's response to this feedback. Two of the
main types of feedback resulted from environmental problems and human health concerns.
(The term "feedback" in this volume refers to the process whereby errors in a
policy or course of action become apparent.)
Effects on the Environment
Beekeepers began to notice damage to their bee populations soon after
the introduction of inorganic pesticides in the 1870s. Because their livelihood depended
on pollination of their crops, orchardists were keenly interested in the beekeepers'
problems. Although initially skeptical, people paid attention to the beekeepers' claims,
and early entomologists carried out simple tests confirming the allegations. In one such
test, a tree and its bees were enclosed in netting and sprayed as usual; a high percentage
of the bees died. By the 1890s agricultural extension experts were advising farmers to
delay application of pesticides until after trees had finished blossoming, and orchardists
quickly followed the advice.
Another example of negative feedback first appeared in the 1880s when
London Purple supplanted Paris Green as the favorite insecticide of American
agriculturalists (the active ingredient in both was arsenic). A waste byproduct of the
British aniline dye industry, London Purple was so highly toxic that it
actually harmed the plants to which it was applied. When experience
with lead arsenate (developed to fight gypsy moths in 1892) demonstrated that plant burn
was not inevitable, a combination of market forces and governmental action gradually
steered pesticide developers toward chemical preparations less destructive to plants.
Recurrent complaints about illnesses and deaths of livestock were a
third source of learning. Incidents that were investigated appear to have been accidents
caused by careless use or mislabeling rather than by correct application. Even when
negative feedback is misinterpreted in such cases, it can still prove useful. While these
incidents did not reveal the errors originally supposed (that normal use of pesticides was
a danger to livestock), the controversies raised consciousness about the possibility of
real dangers, and this stimulated development of scientific testing methods.
The possibility of damage to soil fertility was perceived almost
immediately after the introduction of inorganic insecticides in the 1860s. A few early
tests accurately indicated cause for concern, but other tests showing more reassuring
results got wider publicity and acceptance. Some farmers and agricultural experts issued
recurrent warnings, such as this one from an 1891 issue of Garden and Forest:
"Hundreds of tons of a most virulent poison in the hands of hundreds of thousands of
people, to be freely used in fields, orchards and gardens all over the continent, will
incur what in the aggregate must be a danger worthy of serious thought." There was bitter opposition to use of new chemicals in many rural
areas. Nevertheless, inorganic pesticides won steadily increasing acceptance as a standard
part of agricultural practice, apparently because the immediate feedback (increased usable
crop yields) was positive.
By the 1920s, however, soils in some orchards had accumulated
concentrations of arsenic as high as 600 parts per million (ppm), more than forty times
the amount that occurs in nature. Newly planted trees were difficult to keep alive, and
crop yields declined. For example, in the soil of some rice-growing areas of the
Mississippi Valley, high levels of arsenic remain from past pesticide applications,
causing rice plants to abort
and resulting in poor crop yields. The one positive result is that
such damage helped stimulate research on other pesticides to replace lead-arsenic compounds.
The damage to wildlife caused by insecticides drew public attention
when, in 1962 in Silent Spring, Rachel Carson pointed out the high economic and
aesthetic costs of DDT and revealed that other new persistent insecticides were killing
birds, fish, and other wildlife. She quoted startling statistics
showing pesticide concentrations over 2,000 parts per million in California waterfowl;
Michigan robins killed by DDT had 200 ppm in their oviducts, and their eggs were
contaminated. Even though there was no standard for evaluating
such findings, most readers were shocked. Moreover, Carson documented hundreds of separate
incidents of fish, shrimp, oysters, and other valuable aquatic organisms killed by
dieldrin, endrin, DDT, and other chlorinated hydrocarbon pesticides; the largest kills
each totalled over one million fish.
The emergence of pesticide-resistant insects offered further evidence of
error. At least eight instances of pests becoming resistant to insecticides were recorded
prior to 1940. Houseflies became resistant to DDT in Sweden by 1946, just two years after
the chemical's introduction there. By the mid-1970s over three hundred species of pest
arthropods had developed resistances to one or more pesticides; some were resistant to as
many as a dozen different chemicals.
Because this was a major problem for the agricultural sector, corporate,
government, and university scientists began intensive research on how insects develop
immunity. Resulting knowledge about insects and the biochemistry of pesticides led to
improved agricultural policy. For example, scientists developed the concept of selection
pressure, which holds that the more frequent the spraying and the more persistent the
pesticide used, the more rapid the development of resistance in the pest population. This
concept and the resistance problem led to a search for less persistent pesticides and to
efforts to develop biological control methods intended to reduce agricultural losses by suppressing pest fertility or by bypassing insects' chemical defenses.
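The logic of selection pressure can be sketched with a toy model. The kill rates and starting resistant share below are invented for illustration, not figures from the text; a more persistent chemical acts roughly like extra sprays.

```python
# Toy model of selection pressure: each spray kills far more susceptible
# insects than resistant ones, so survivors (and their offspring) skew
# resistant. Illustrative parameters only.

def resistant_fraction(n_sprays, kill_susceptible=0.99, kill_resistant=0.20,
                       start_resistant=0.001):
    s = 1.0 - start_resistant   # susceptible share of the pest population
    r = start_resistant         # resistant share
    for _ in range(n_sprays):
        s *= 1.0 - kill_susceptible   # most susceptible insects die
        r *= 1.0 - kill_resistant     # resistant insects largely survive
        total = s + r
        s, r = s / total, r / total   # population rebounds between sprays
    return r

for sprays in (0, 1, 2, 4):
    print(sprays, round(resistant_fraction(sprays), 3))
```

Even with a tiny initial resistant minority, a few sprayings make resistance dominant, which is why frequent spraying of persistent chemicals backfired so quickly.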
Human Exposures and Responses
Because they sometimes were visible and drew consumers' attention,
pesticide residues on fruits and vegetables were a prime source of feedback and learning
about the timing and advisable limits of chemical spraying. Early experimenters generally
agreed that the risk of immediate poisoning from residues was quite small, but it was not
until 1887 that the possibility of chronic illness from cumulative exposures was recognized.
Several well-publicized incidents in Britain between 1891 and 1900,
sensationalized by the media, directed the attention of medical and governmental
authorities to the problem of chemical residues. This led to the establishment of British
and world tolerances (levels generally accepted as safe) for arsenic residues.
In the United States, seizures of contaminated produce by local
governments sparked the beginning of serious regulation. In 1919 the Boston City Health
Department began a series of seizures of western apples and pears, some contaminated with
more than twenty times the residue levels considered acceptable in world commerce. The
Bureau of Chemistry in the U.S. Department of Agriculture (USDA) made a decade-long
attempt to educate American growers about the problem but met with little success. In 1925
southern New Jersey and Philadelphia experienced an epidemic illness that newspapers
attributed to spray residues on fruits. These claims turned out to be incorrect, but
federal inspectors did find apples with very high residue levels. When New Jersey growers
refused to clean the affected apples, the first actual seizure and destruction of produce
under the Food and Drugs Act of 1906 took place.
In late 1925 and early 1926 British newspapers published nearly a
thousand cartoons, articles, and editorials lambasting arsenic-contaminated American
fruit. The incident started when a family became ill from arsenic poisoning caused by
imported U.S. apples; subsequent inspections revealed contaminated American fruit
throughout Britain. The British government threatened a complete embargo on U.S. produce,
causing the U.S.-based International Apple Shippers Association to take
measures limiting arsenic levels on export fruit. Produce for domestic
consumption in the United States also gradually improved, owing to better washing
techniques and longer delays between spraying and harvest. Nonetheless, residue levels
remained higher in the United States than those allowed in Britain because of the strong
farm lobby in Congress.
Several lessons were learned from this case. The concept of tolerance (a
level of poison that most people could consume daily without becoming ill) was developed
and gradually incorporated into legal standards. Dissatisfaction with initial enforcement
of these standards led to stricter enforcement, which led to improved techniques for
washing fruit and other methods for keeping residue levels close to legal standards.
Finally, various farmers' organizations began to demand the development of insecticides
that would be as effective as arsenic but less toxic.
Knowledge and regulation of pesticides also increased as a result of
data on human exposures. While the average person had a DDT level of 5 to 7 parts per
million (ppm) in the late 1950s, farm workers were found to store an average of 17.1 ppm,
and the highest recorded level, 648 ppm, was found in an insecticide plant worker. These
figures approximately doubled in the 1960s. Although laboratory
evidence showed that minute concentrations of pesticides could inhibit human enzyme and
oxidation processes, there was no solid evidence that these changes would lead to serious
human illness. Some methodologically weak studies even showed that high doses were safe. Nevertheless, statistics on occupational exposure levels, like
those on insecticides' effects on wildlife, alarmed many people.
In 1974 the Environmental Protection Agency (EPA) approved the pesticide
leptophos for use on lettuce and tomatoes, despite evidence suggesting that leptophos
caused nervous disorders. When workers in a Bayport, Texas, plant that manufactured the
chemical experienced severe nervous disorders, EPA quickly rescinded its approval after
the media publicized the incident, and the plant ceased production of the pesticide.
In 1975 workers at a Hopewell, Virginia, chemical manufacturing plant owned by Allied Chemical were found to suffer from brain
and liver disorders and from other serious ailments caused by the chemical kepone.
Investigation revealed that the plant had been illegally discharging dangerous effluents
into the James River for the previous three years. As a result, the river was closed to
commercial fishing for several years, and Allied Chemical was fined $5 million and
required to donate an additional $8 million for environmental research.
These incidents were significant in themselves and contributed to
tightening occupational health safeguards in the pesticide industry. More generally, the
kepone and leptophos problems directed media, interest group, and congressional attention
to the toxic substances problem.
Results of Trial and Error
The use of trial and error has been more effective in the regulation
of pesticides than we initially believed possible. There have been many errors, much
feedback about them, and numerous efforts to learn from these errors. The lag time between trial and feedback has been long, but
this has only slowed rather than prevented the learning process.
The result is that most of us appear to be safer today from pesticides
than we were a generation or two ago. The currently used carbamate and organophosphate
pesticides are much less persistent, and therefore much less dangerous to consumers'
health and the ecosystem, than were the chlorinated hydrocarbon and arsenic-lead-fluorine
pesticides. Levels of DDT and other persistent pesticide residues in food
came uncomfortably close to the accepted tolerance limit in 1970. In contrast, recent
readings on levels of the organophosphate chemical malathion show expected daily intake in
the United States to be less than 1 percent of the tolerance limit. Residue levels for
carbaryl (the major carbamate pesticide) are even lower. Not
everyone accepts such official standards of safety, but most of the trends are reassuring.
Even though the trial-and-error method has worked to a considerable
extent for pesticides, the strategy is clearly of limited utility, particularly when considering the larger universe of chemicals. Only about six hundred different chemicals are
used in contemporary pesticides, and less than a third of these predominate. Therefore, it
is much easier to monitor feedback about them than to keep track of all sixty thousand
chemicals in use today. Moreover, we cannot say that trial and error has worked well
enough, even for pesticides. The harm has been substantial, and perhaps partially avoidable.
Revelations of damage from various types of chemicals, the gradual
emergence of the field of toxicology, and popular books such as 100,000,000 Guinea Pigs
in the 1930s and Silent Spring in the 1960s, prompted doubts about the
trial-and-error process. Slowly, more deliberate strategies began to emerge to supplement
trial and error.
Early Steps to Supplement Trial and Error
The first federal laws dealing with toxic chemicals were the 1906 Food
and Drugs Act and the 1910 Federal Insecticide Act. Both laws
were based purely on trial and error: they gave federal agencies authority to seize an
adulterated substance only after it had been marketed in interstate commerce. The
government then had to prove in court that the product exceeded legal toxic limits.
The 1938 Food, Drug, and Cosmetic Act was the first law to mandate a
major change in strategy: testing of substances before they were sold. The intention
was to obtain information about ineffective or dangerous chemicals before dangerous
effects could occur. (This applied only to pharmaceuticals.)
The 1947 Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA)
extended the advance testing requirement to pesticides. It required registration with the
U.S. Department of Agriculture, prior to marketing, of any pesticide products sold in
interstate commerce. To obtain such registration, manufacturers had to submit the results
of toxicity tests on their pesticides. The effect of this legislation was to shift to
manufacturers part of the burden of proving the safety of pesticide products sold in
interstate commerce. Previously, FDA had been forced into onerous litigation, bearing
the burden of proving in court that a pesticide was sufficiently
dangerous to justify removing it from the market. The new requirements on manufacturers
thus helped reduce the probability of dangerous chemicals remaining on the market. FIFRA
also represented a first step toward another new strategy: adding an element of caution to
the way in which chemicals are introduced and used in society.
The Delaney Amendment of 1958 and the Color Additives Amendments of 1960
represented the clearest examples of this conservative strategy. More than any previous
legislation, the Color Additives Amendments put the burden of proving safety squarely on
the manufacturer. The Delaney Amendment specified that no
chemical additive shown to cause cancer in laboratory animals could be added to food. It
instructed FDA to accept animal tests that may not always be valid for humans and to treat
even mildly carcinogenic substances the same as potent ones. If there are to be errors in
how society introduces and uses chemicals, the 1958 law implied, it is better to err on
the side of safety. However, as we will discuss in chapter 8, this attempt to impose
caution proved too conservative to be workable.
The 1954 Pesticide Chemicals Amendment introduced a third strategy. It
empowered the Department of Agriculture and FDA for the first time to ban the use of
excessively dangerous pesticides. Complaints abounded, however, that the existing system's
procedures stifled effective action by the regulatory agencies. So FIFRA was amended
repeatedly, each time easing procedures for banning or limiting pesticides the regulatory
agencies considered too dangerous. For example, a controversy over the herbicide
2,4,5-T led the director of the White House Office of Science and Technology to
complain in 1970 that "there is not sufficient flexibility [in the
laws] . . . to allow the government to take action" expeditiously when
new information reveals unforeseen health hazards. The 1972
Federal Environmental Pesticide Control Act partially eased this difficulty, reducing the
required burden of proof that a pesticide posed an unreasonable risk. The act allowed EPA
to block registration of a pesticide as long as evidence does not clearly demonstrate that
benefits outweigh risks.
The 1972 act also divided pesticides into categories, corresponding
roughly to prescription versus nonprescription drugs. In an effort to guard against errors
due to incompetent application, use of the more dangerous pesticides henceforth could be
restricted to individuals and firms certified by EPA. The act also provided for indemnity
payments to manufacturers of pesticides that EPA orders off the market. This provision
dilutes opposition to banning dangerous chemicals and thus makes regulatory action easier
and potentially quicker.
Strategies for New Toxic Substances
The early trial-and-error process in the use of toxic chemicals, then,
was followed by a trial-and-error process in regulation. The laws became increasingly
comprehensive, and the regulatory strategies became increasingly sophisticated from 1938
to 1972. But the most significant improvements on trial and error were not developed until
the Toxic Substances Control Act (TSCA) of 1976.
The process eventually leading to TSCA began with a 1971 report on toxic
substances by the Council on Environmental Quality. Approximately two million chemical
compounds were known at that time, and some two hundred fifty thousand new ones were being
discovered each year. The great majority of such compounds remained laboratory curiosities
that never entered commerce, but approximately one thousand new compounds were believed to
be entering the market annually during the late 1960s and early 1970s.
An estimated 10 to 20 percent of these new compounds posed environmental threats, as an
EPA official later testified to Congress. If this figure was
correct, it implied that up to two hundred new environmental hazards might be created each year.
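The implied arithmetic is simply the hazard-rate estimate applied to the annual flow of new commercial compounds, using the figures quoted in the text:

```python
# Back-of-the-envelope version of the testimony's estimate: roughly 1,000
# new compounds entering commerce per year, of which an estimated 10 to 20
# percent posed environmental threats (figures from the text).
new_compounds_per_year = 1000
low_share, high_share = 0.10, 0.20
low = int(new_compounds_per_year * low_share)
high = int(new_compounds_per_year * high_share)
print(f"{low} to {high} potential new hazards per year")
```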
Legislators, environmentalists, and even the chemical industry
recognized significant shortcomings in existing laws about toxic substances. Congressional
committee reports and floor debates made extensive references to fluorocarbons (chemicals
used in spray cans and refrigeration equipment); these chemicals had recently been found
to pose a threat to the ozone
layer. Also prominent in these discussions were recent incidents
involving mercury, lead, vinyl chloride, kepone, and PCBs; the last (widely used as
lubricants in electrical equipment) were the only chemicals specifically singled out for
special treatment in TSCA. Decision makers also were alarmed by emerging information about
environmental sources of cancer (some of it exaggerated); for example, a Senate committee
was impressed by "estimates that 60 to 90 percent of the cancers occurring in this
country are a result of environmental contaminants. . . . The industrial
centers, where industrial chemicals are obviously found in largest concentration, had the
highest incidence of cancer."
A central provision of TSCA requires manufacturers to submit
premanufacture notices to EPA for each new chemical at least ninety days before commercial
production. EPA has the authority to require manufacturers to undertake whatever toxicity
testing the agency considers necessary, and EPA is required to ban the manufacture of
those new chemicals that present an "unreasonable risk."
A primary motivation behind TSCA, evident throughout the hearings and
floor debates, was the desire to prevent excessively dangerous chemicals from being
introduced into use, rather than waiting for their negative effects to be observed before
removing them from use. As the Senate Commerce Committee put it: "Past experience
with certain chemical substances [illustrates] the need for properly assessing the risks
of new chemical substances and regulating them prior to their introduction." TSCA, the committee said, would "serve as an early warning
system." Senator John Tunney (Democrat, California) reiterated the belief that the
premarket screening provisions "will assure that we no longer have to wait for a
body count or serious health damage to generate controls over hazardous substances."
The Senate Commerce Committee also acknowledged the social and political
consequences of the time lag between introducing a chemical and recognizing its negative
effects: the longer the delay in realizing dangers, the more reliant industry
becomes on a particular chemical. As the committee report noted, it is prior to first
commercial use that human suffering, jobs lost, wasted capital expenditures, and other
costs are lowest. Frequently, it is far more painful to take regulatory action after all
of these costs have been incurred. For example, . . . 1 percent of our
gross national product is associated with the vinyl chloride industry. Obviously, it is
far more difficult to take regulatory action against this [carcinogenic] chemical now,
than it would have been had the dangers been known earlier when alternatives could have
been developed and polyvinyl chloride plastics not become such an intrinsic part of our
way of life in this country.
As a result of TSCA, manufacturers are now legally required to
demonstrate the safety of new chemicals, just as they are for pharmaceuticals, food
additives, and pesticides. Any negative evidence, even the sketchiest, may be legally
sufficient to keep a new chemical off the market. In practice
some risks are considered acceptable if the projected benefits are significant, but
uncertainties, if the decision is close, tend to weigh against the side that bears the
burden of proof. So TSCA makes strict regulation easier.
Mechanics of Premanufacture Notification
The premanufacture notification (PMN) system ensures that EPA will
have considerable information about a chemical's molecular structure, anticipated
production volume in the first few years, by-products, exposure estimates, results of
toxicology testing, manufacturing methods, and disposal techniques. EPA can require that
industry conduct virtually any tests considered necessary to evaluate a new chemical's
safety. Moreover, TSCA grants EPA more authority than ever before to act on the basis of uncertain or incomplete evidence.
EPA's review process begins with a structure-activity team of scientists
who assign a toxicity rating to each chemical; another group of scientists rates the
degree to which individuals and the environment are likely to be exposed to the
chemical. If exposure is not rated high and health effects and
ecological concerns are all rated low, the chemical passes premanufacture screening.
Otherwise the chemical moves to third, fourth, and fifth levels of consideration, with
each stage involving higher levels of decision makers and subjecting the chemical to closer scrutiny.
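The first screening decision described above can be sketched schematically. The rating categories and the pass rule are paraphrased from the text; the function itself is my own illustration, not EPA's actual procedure.

```python
# Schematic of the initial premanufacture screen: a chemical passes only if
# exposure is not rated high and both health and ecological concerns are
# rated low; anything else escalates to further review.

def passes_initial_screen(exposure, health, ecological):
    """Each rating is 'low', 'moderate', or 'high'."""
    return exposure != "high" and health == "low" and ecological == "low"

print(passes_initial_screen("low", "low", "low"))    # clears the screen
print(passes_initial_screen("high", "low", "low"))   # escalates to review
```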
Initially, the total number of PMN notices submitted was far below the
expected amount. This seemed to indicate that industry was launching fewer new chemicals
because of the new regulatory requirements. But the number of PMN submissions increased
steadily in the early 1980s and leveled off in the range of 1,200 to 1,300 per year.
Surprisingly, only about three chemicals out of every ten
processed through the PMN system have entered commercial production.
According to EPA staff, the chemical companies "invest" in the PMN statement as
a stage in research and development, that is, well before a decision has been made to
market a chemical. "As soon as prospects for marketing loom on the horizon, they get
the PMN in so that marketing will not be held up if the company does decide to go ahead
with it." (Some of the submitted PMNs may yet come to
market and thus increase the current rate.)
How carefully PMNs are reviewed depends partly on the amount of staff
time available for the task. By 1985 the equivalent of 125 professional staff members
and 14 support staff members were assigned to full-time work on the PMN system. This represented an
increase of approximately 21 percent over professional staffing levels of fiscal 1981 and
a decrease of about 14 percent in support staff. Meanwhile, expenditures on the PMN
program declined approximately 15 percent in real dollars between 1981
and 1983. While budget allocations and staffing levels changed
moderately, PMN submissions increased substantially. At the 1984 submission rate of 1,250
PMN notices per year, just over one month of professional staff time can be devoted to each new chemical.
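The workload figure is a straightforward division of the staffing and submission numbers given in the text:

```python
# Back-of-the-envelope check of the workload figure, using the chapter's
# numbers: 125 professional staff and roughly 1,250 PMN notices per year.
staff = 125
pmns_per_year = 1250
staff_months = staff * 12                  # total professional work months per year
months_per_pmn = staff_months / pmns_per_year
print(round(months_per_pmn, 2))            # just over one month per notice
```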
While the PMN program has fared better than other programs at EPA in
budget battles (and no doubt efficiency has improved since the program went into full
operation in 1981), it is questionable whether the current budget is adequate for the
existing workload. The fact that only three chemicals out of every ten processed by EPA
enter commercial production exacerbates the problem. In effect, scarce EPA time and talent
are being "wasted" on chemicals that companies never bring to market.
Like most laws, TSCA is changing during implementation. The legislation
explicitly provided authority for EPA to waive PMN requirements for certain classes of
chemicals that are deemed to pose acceptably low risk and, as a result, numerous requests
for exemption have been submitted. The one that would cover the most chemicals came from
the Chemical Manufacturers Association in May 1981. It sought exemptions for high
molecular weight polymers, low-volume chemicals of all kinds, and chemicals that are used
only as production intermediates and that remain entirely on the premises of a chemical
factory. The Dyes Environmental Toxicology Organization made a similar request, and also
asked that EPA shorten the review period for various dyes and dye intermediates.
EPA has granted the bulk of the requested exemptions. Even though
manufacturers of an exempted chemical still must notify EPA, there will be less paperwork,
and manufacturing can commence at any time, as long as notice is filed fourteen days prior
to actual marketing. There are significant exclusions and restrictions in the exemption
process that are still unsatisfactory to manufacturers, however.
Exemptions to PMNs are obviously advantageous to the chemical industry.
But given the large number of new chemicals and the even larger number of PMNs, exempting
certain chemicals may be a sensible way to adjust regulatory strategy, helping to
concentrate attention on the more dangerous
chemicals. Scientists consider high molecular weight polymers to be relatively nontoxic,
and EPA is following the weight of scientific judgment in exempting them from PMN
scrutiny. Exempting low-volume chemicals and site-limited intermediates represents a
regulatory judgment that the costs of review outweigh the risks of no review. However,
only experience can tell whether these exemptions are a good idea.
Evaluating the PMN System
Evaluation of the PMN system is impeded by the degree of expertise
necessary to judge the scientific quality of EPA's decisions. Evaluation is even further
complicated by the very high percentage of PMN submissions that omit significant
information because manufacturers claim confidentiality. Approximately 50 percent of PMNs
contain at least one claim of confidentiality on chemical formula, name of manufacturer,
intended uses, tests performed, amounts to be manufactured, or other information, and some
PMNs claim that everything about the new chemical is confidential. The General Accounting
Office and the Office of Technology Assessment, both exempt from the confidentiality
restrictions, have begun to study the implementation of TSCA, but their reports cannot
divulge any confidential information on which their conclusions may have been based.
It is clear, however, that the new system already has deterred
production of some new chemicals. For instance, one manufacturer withdrew a PMN notice in
April 1980 and did not manufacture six new plasticizers because EPA ordered a delay on
production. The agency had required the manufacturer to develop and supply additional data
on the chemicals' dangers. But some industry toxicologists question whether these
plasticizers were more dangerous than those already on the market.
Detailed review and regulatory action against new chemicals have been
relatively rare as a percentage of PMN submissions. Only eighteen (3 percent) of PMN
notices received detailed reviews in 1981; the number increased in 1982 to fifty (6.25
percent). The Office of Toxic Substances initiated eleven "unusual actions"
during 1981 and thirty-one during 1982. These included: (1) suspensions of the review
period to allow more time for scrutiny, (2) voluntary agreements under which manufacturers
agreed to restrict the use of their new chemical in some way that EPA found sufficient to
remove it from the category of unreasonable risk, and (3) formal rule-making proceedings
to block manufacture of proposed new chemicals. In addition, six PMN notices were
withdrawn by manufacturers in 1981 and sixteen in 1982; some of these would have been
subject to enforcement action had they continued through the detailed review process.
The number of PMNs held beyond ninety days increased during 1983 and
1984. By early 1985 more than 10 percent of PMNs were being temporarily delayed. Whether
this actually is a result of deeper scrutiny or is merely indicative of a backlog of work
within EPA is difficult to discern. Still, only a very small number (less than 0.4
percent) have been rejected entirely on the grounds that the chemical presents an
unreasonable risk. There are several possible interpretations: (1) manufacturers may be
voluntarily refraining from production of the more risky new chemicals, at least in part
because they expect that the substances would not be approved, (2) the original estimates
that 5 to 20 percent of new chemicals would be dangerous were inaccurate, or (3) the PMN
system is not screening out some of the riskier substances.
Strategies for Chemicals Already in Use
The task of monitoring some three hundred to four hundred new
chemicals each year is difficult enough. But what of the sixty thousand or more existing
chemicals, of which unknown thousands may have negative effects on human health or on the
ecosystem? This task is staggering, and since attention can be devoted to only a
relatively small number of chemicals each year, priorities must somehow be set. One way of
setting priorities is by trial and error: wait for the consequences to become known and
then deal with those that emerge soonest
and are most severe. This strategy still is being used in Japan, Germany, and most other
nations, and, as we saw in the case of pesticides, trial and error can be a viable way of
setting regulatory priorities. However, TSCA attempts to improve on the results that could
be achieved through such trial-and-error by imposing a priority-setting process.
TSCA established the Interagency Testing Committee (ITC) "to make
recommendations to the Administrator respecting the chemical substances and mixtures to
which the [EPA] Administrator should give priority consideration."
The committee is instructed to consider "all relevant factors," including:
Quantities likely to enter the environment;
Number of individuals who will be exposed;
Similarity in structure to other dangerous chemicals.
TSCA limits the total number of chemical substances and mixtures on the
list at any one time to a maximum of fifty, and gives EPA just one year to respond to each
ITC recommendation. Clearly, the intent is to identify and force action on high-priority
testing needs and to keep EPA from being overwhelmed by the sheer size of the evaluation task.
Structure and Mechanics of the ITC
The ITC is composed of eight formal representatives and six liaison
representatives from a total of fourteen federal agencies, departments, and programs. The
ITC has the equivalent of a staff of about eighteen professionals, most of whom are from
outside consulting organizations. The committee's budget of about $400,000 remained
constant in the early 1980s as did its workload. The ITC meets once every two weeks for a
full day, and most members spend additional time preparing for such meetings. But all
members have heavy responsibilities in their regular agencies, so their ITC work is a
part-time activity. These conditions are not ideal for such demanding work.
By 1986 the Interagency Testing Committee had issued eighteen
semi-annual reports, naming over one hundred individual chemical substances or classes of
chemicals for priority testing. To arrive at these recommendations, the first step is a
computer search of scientific articles on toxicity, from which is developed a working list
of several thousand potentially dangerous chemicals. These chemicals then are scored on
the basis of production volume and the other criteria listed above. The highest scoring
chemicals are subjected to detailed staff review, and the ITC reaches its decisions on the
basis of a ten- to fifty-page dossier on each of approximately sixty chemicals per year.
The ITC recommends for priority testing those chemicals that combine high exposures with
probable high toxicity. In this process, nearly four thousand chemicals were considered by
1986, of which approximately five hundred were reviewed in detail.
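The committee's staged screen (literature search, scoring against the statutory criteria, detailed dossier review, final recommendation) can be sketched as a filtering pipeline. The scoring formula, weights, and cutoffs below are hypothetical illustrations, not the ITC's actual rules.

```python
# Illustrative sketch of the ITC's staged screening process. The criteria mirror
# TSCA's "relevant factors" (production volume, exposed population, structural
# similarity to known hazards); the scoring function and cutoffs are invented
# for illustration only.

def score(chem):
    """Rank a candidate by combined exposure and suspected toxicity (hypothetical weights)."""
    hazard_weight = 2.0 if chem["similar_to_known_hazard"] else 1.0
    return chem["production_volume"] * chem["people_exposed"] * hazard_weight

def screen(candidates, detailed_review_slots=60, recommend_cap=50):
    # Stage 1: a literature search has already produced `candidates` (in practice,
    # a working list of several thousand chemicals).
    # Stage 2: score every candidate and keep the highest for detailed staff review
    # (roughly sixty dossiers per year, per the text).
    ranked = sorted(candidates, key=score, reverse=True)
    reviewed = ranked[:detailed_review_slots]
    # Stage 3: recommend the top combinations of exposure and probable toxicity,
    # respecting TSCA's fifty-chemical cap on the priority list.
    return [c["name"] for c in reviewed[:recommend_cap]]

candidates = [
    {"name": "chem-A", "production_volume": 90, "people_exposed": 80, "similar_to_known_hazard": True},
    {"name": "chem-B", "production_volume": 5, "people_exposed": 10, "similar_to_known_hazard": False},
]
print(screen(candidates, detailed_review_slots=1))  # only the top-scoring candidate survives
```

The essential design point is that each stage cheaply discards most candidates so that expensive detailed review is spent on only a few dozen chemicals a year.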
Problems to Date
Testing Classes of Chemicals
Many of the ITC's early test recommendations were for broad classes of
chemicals. Because there are so many chemicals that can pose dangers, the committee hoped
to speed up the testing process by focusing on classes of chemicals rather than on
individual chemicals. But such testing requires that appropriate groupings of chemicals be
identified, and this is nearly impossible. When EPA began investigating how to pursue the
ITC's recommendation on benzidine-based dyes, for example, there proved to be some five
hundred of these dyes that were combined and marketed under a total of twenty-five
thousand different trade names. A single category simply could not encompass these
chemicals' diverse exposure expectations, production volumes, structure-activity
relationships, and other characteristics relevant to testing. A similar problem arose with
priority testing of the organometallic compounds known as alkyltins,
and, as a result, the ITC's recent testing recommendations have generally been for
individual chemicals rather than broad classes.
How Many People Are Exposed?
One of the main criteria in setting testing priorities is the number
of people likely to be exposed to a chemical. No matter how toxic, a chemical that is
manufactured in small quantities and contained will not create many problems.
Unfortunately, however, available information about exposure levels is minimal.
The only nearly comprehensive database available in the mid-1980s is
based on a 1972 survey of five thousand workplaces by the National Occupational Health
Survey (NOHS). It relied partly on an indirect measurement method that now seems
questionable. For example, because many degreasing solvents contain chlorobenzene, all
employees in workplaces that used such solvents were assumed to have been exposed to this
chemical. This assumption yielded estimates that are now considered by EPA to have
exaggerated exposures by up to 1,000 percent.
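The inflation produced by this indirect method can be illustrated with a toy calculation: counting every employee of a solvent-using plant as exposed, versus counting only those who actually handle the solvent. The plant sizes and handler counts below are hypothetical.

```python
# Toy illustration of how the NOHS indirect method inflates exposure counts:
# every worker in a plant stocking a chlorobenzene-containing solvent is counted
# as exposed, even if only a few actually handle it. Figures are hypothetical.
plants = [
    {"employees": 500, "actually_handle_solvent": 40},
    {"employees": 200, "actually_handle_solvent": 25},
]

assumed = sum(p["employees"] for p in plants)               # NOHS-style indirect count
actual = sum(p["actually_handle_solvent"] for p in plants)  # direct measurement
inflation = (assumed - actual) / actual * 100

print(f"assumed exposed: {assumed}, actually exposed: {actual}, overestimate: {inflation:.0f}%")
```

With these invented numbers the indirect method overstates exposure by nearly 1,000 percent, which is the order of magnitude EPA later attributed to the NOHS estimates.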
The NOHS does not take into account chemical exposures outside the
workplace, yet there is no other source of such information. Nor is there, for most
chemicals, standard scientific literature on exposures. A Chemicals Inventory kept by EPA
contains information on more than fifty thousand chemicals, but it is not updated to
reflect current production or imports, and it was never intended as a means of calculating
probable exposures. As a result, analysis of exposures is the "weakest part of our
analysis-across the board" according to one of the EPA officials responsible for
making decisions about priority testing.
EPA's Backlog of Cases
EPA is required by TSCA to respond to the ITC's priority
recommendations within twelve months of the date they are added to the list, but EPA has
not always met this schedule. As of mid-1980 EPA had proposed responses to only four of
the thirty-three chemicals whose one-year deadline had expired. The Natural Resources
Defense Council, a prominent environmental group, brought suit against EPA in an attempt
to remedy the delays, and the court ordered EPA to develop a plan for timely testing. EPA complied with the order, and was fully caught up on its cases
by late 1983.
Voluntary Agreements with Industry
EPA in 1980-81 decided to negotiate, rather than order,
testing; the agency claims that it can get industry to test more quickly by this approach.
Most ITC members find the arrangement acceptable, as does a study by the General
Accounting Office. But the Natural Resources Defense Council
contends that voluntary testing is a violation of TSCA and that it weakens public
protection. If voluntary testing continues to work
satisfactorily, approximately half of the ITC recommendations to date will have led to
earlier or more in-depth testing of chemicals than would have occurred without such an approach.
Progress and Continuing Problems
A comprehensive analysis of U.S. policy toward toxic chemicals would
necessarily examine many more issues than have been discussed in this chapter, but several
conclusions are evident.
There has been a great deal of improvement in particular facets of the
regulation and use of toxic chemicals as a result of trial-and-error learning. For
example, Paris Green, which was once a severe threat to human health, is no longer used on
fruits and vegetables. Similarly, use has been curtailed of DDT and many other persistent
pesticides; current insecticides degrade into relatively nontoxic components much faster
than those used in 1970, 1950, or even 1920.
Also, significant adjustments to the regulatory system are being made
that should improve on trial and error. As a result of the premanufacture notification
system, some offending chemicals will be screened out prior to introduction. The
Interagency Testing Committee is gradually developing priority-setting procedures that
should help direct governmental attention to the more dangerous chemicals and curtail use
of such chemicals before problems actually arise. While TSCA is administered more laxly
than environmentalists consider warranted, it is likely that many risky chemicals
of the future will be spotted early instead of decades after their distribution throughout
the economy and ecosystem.
Legal and institutional innovations have improved the ability of
federal, state, and local governments to cope with toxic chemical problems, and it is a
positive development that, within the past two decades, environmental protection agencies,
major environmental statutes, and environmental groups have come into existence in most
industrialized nations.
However, these optimistic conclusions must be tempered by four
qualifications. First, since there is a twenty- to thirty-year delay between exposure to
carcinogens and manifestation of cancer, we have not yet witnessed the results of
chemicals used during the past several decades. Moreover, there were approximately ten
times as many synthetic chemicals produced between 1965 and 1985 as in all previous human
history. While the trends do not indicate an imminent cancer epidemic, we must wait for
more time to pass before assessing the toll on human health.
Another qualification concerns priority setting. The effort made to set
priorities is noteworthy-a genuine breakthrough in government's approach to regulation. To
date, however, success has been limited. While responsible agencies are becoming more
proficient at the task of setting priorities, it is too early to tell whether the results
of these efforts will be significant.
Third, we have emphasized repeatedly that a central part of
trial-and-error learning is the recognition of negative feedback. However, the PMN system
is partially insulated from such feedback because of the confidentiality guaranteed to
chemical manufacturers. This may be a predicament with no satisfactory resolution. Forcing
manufacturers to reveal trade secrets would reduce their incentive to innovate and would
increase their incentive to circumvent the system.
Finally, considerable-perhaps excessive-faith in science was displayed
by Congress and by environmentalists who argued for premarket screening of new chemicals.
The ability to make intelligent advance judgments about a new chemical depends partly on
the results of tests for toxicity. But just as important are how a chemical will be used
and the quantities in which it will be manufactured. PMN notices give the original
manufacturer's estimate on these matters, but the uses to which a chemical is put can
change. So the assignment given
EPA is as much a requirement for guesswork on a new chemical's
commercial future as it is for scientific testing of the chemical's dangers. Fortunately,
TSCA established another regulatory process to monitor chemicals that are being put to
significant new uses; but that regulatory process is even less proven than the PMN system.
Our conclusion is that, overall, scientific analysis has not entirely
replaced trial and error in the regulation of toxic chemicals.