Global Networks and Local Values
The preceding chapters have sought to provide a framework for understanding how global networks influence local values, political institutions, and ways of doing business, as well as how those networks might be governed. This chapter and the next look more closely at some particular issues--namely, those related to free speech and to the tensions between privacy and freedom of information.
To a certain extent, the selection of these topics is arbitrary. In other chapters, the report has touched on other topics that might reasonably be examined more closely: consumer protection and copyright; the social changes inherent in a networked world; and the shifting boundaries between public and private spaces and the blurring of the line between consumer and producer. Transnational issues could have been added as well: tax policy, customs and tariffs for Internet traffic, and technical standardization are obvious examples.
But free speech and privacy stand out in two respects: they have attracted considerable public interest, and they are characterized by conflict between the two nations that are the focus of this report. Therefore, this chapter and the next will address these issues. The intention is to discuss them as examples of the tensions and challenges that global networks introduce in a society's values, but these are issues with such strong legal overtones that it is impractical to approach them without incorporating legal considerations into the discussion as well.
For both the United States and Germany, freedom of speech is such an important formal value that it is explicitly protected by the First Amendment to the U.S. Constitution and by Article 5 of the German Basic Law. Because of this constitutional protection, legislatures have very little latitude to pass laws that restrict speech. If the legislature, or any other governmental body, moves too far in that direction, individuals in each country can seek relief in the highest court.
This constitutional protection of free speech obligates both government and private parties to tolerate many kinds of expression, regardless of how much it may clash with individual values or with the traditions of the country. Yet, restrictions on speech are common around the world, with many instances of censorship and criminal prosecution for the criticism of government policy. Even in the United States and Germany, policymakers have sought legislation from time to time that would place restrictions on various kinds of speech. Such legislation has usually been struck down as unconstitutional, but the continual efforts made, and the restrictions sometimes allowed, suggest that the right of free speech is not absolute and that some substantive value is being explicitly or implicitly applied to distinguish protected from unprotected speech. This substantive value (or these values) may well be in tension with the formal value of free speech.
Some of those competing values may also be formal ones. For example, the exercise of free speech might directly or almost directly cause physical harm--such as injuries and death resulting from the publication of bomb-building instructions or the psychic trauma that children might suffer as the result of exposure to certain kinds of sexually explicit material. Similarly, as Oliver Wendell Holmes famously observed, one cannot (falsely) shout "Fire!" in a crowded theater; in the words of an old aphorism, one person's freedom ends where another's nose begins. Where the connections among formal values are relatively clear and unambiguous--they are not always so--it is relatively easy to make judgments about which one should take precedence. The situation is not so straightforward when substantive values are involved.
Generally speaking, formal values such as free speech establish rights and procedures that enable a society to function effectively and, it is hoped, fairly. But it takes substantive values to provide the glue, the shared outlook that makes a society more than a collection of individuals. If the values under which a society operated were composed exclusively of formal values, normative views of the world, such as the hierarchical, the egalitarian, or the fatalistic, which hold societies together and distinguish them from one another, would be denied any status whatsoever.1
In fact, substantive values do come into play. For example, restrictions on free speech may be the result of seeking balance--the formal value of free speech weighed against the competing claims of certain substantive values. Of course, the notion that a balance is involved suggests that the mere existence of a conflicting substantive value is not a sufficient reason to restrict free speech. The critical question is whether the exercise of free speech violates a substantive value to an unacceptable degree; answering this question entails a value judgment that is not only contentious but often rendered differently in different societies, even those as similar as the United States and Germany. The treatment of two such issues--hate speech and protection of children and adolescents--is discussed in the following sections.
Free speech was an important right long before the advent of the Internet, but there were practical limitations on how well individuals could exercise it to influence their societies. People could find a soapbox in Hyde Park or Union Square, send a letter to the editor, or distribute leaflets.2 But if they wanted to have an impact on public policy or on society at large, they had to go through intermediaries. The Internet brings society much closer to the ideal of a free market of ideas, in that surfacing a wide range of ideas in a public forum, including those disparaged as fringe, is easier than it has ever been before. Nevertheless, limitations clearly remain, and the availability of ideas on a Web site does not assure that everyone will find them or require that everyone access them.
Hate speech can be defined as the willful public expression of hatred toward any segment of society distinguished by a characteristic such as color, race, religion, ethnic origin, or sexual orientation. Hate speech can be particularly debilitating to a society because it attacks an entire group. Thus it threatens the peaceful coexistence of different groups within the population and, ultimately, the stability of the community.
Hate speech is more than merely hurtful; it creates a climate that can lead to depriving certain groups of their civil rights. The danger need not be concrete and immediate; sad experience has shown that the verbal stigmatization of particular groups in a community can build up negative attitudes in the population at large, which can lead to discrimination and may even erupt into violence against the group. Despite the near-universal revulsion to hate speech among civilized peoples, there are significant differences between the United States and Germany in how it is handled.
Two cases, widely reported in the media and described in Chapter 3, demonstrate the problems created by these differences: the online sale of Mein Kampf (August 1999) and the CompuServe case (May 1998). The first arose from differences in the laws of the two countries concerning what can be distributed, and the second concerned the responsibility of a service provider for the messages transferred through its network. The CompuServe case attracted particular attention in the American press, with headlines like "Germany's Internet Angst," "A 'cyber-coup' for Germany's cyber-cops," "German Net future questioned," and "Efforts to control the Net abuse liberty."
In terms of value balance, the United States gives the formal value of free speech more weight than essentially any substantive value and almost all other formal values. Therefore, attempts to proscribe hate speech using legal remedies such as the criminal code or municipal regulations have invariably been struck down by the Supreme Court, based on the idea that such remedies violate the constitutional right to freedom of expression contained in the First Amendment. Indeed, because Article 20 of the International Covenant on Civil and Political Rights3 required signatory states to agree that "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law," the United States refused to ratify that part of the Covenant. Furthermore, in ratifying the Genocide Convention, the United States made specific reservations to prevent any impact of the Convention on First Amendment rights in the United States.
A measure of the primacy given to the right to freedom of expression is that the First Amendment does not specify any exceptions, and the Supreme Court has been very cautious in allowing any. Over the years, it has developed a strict set of criteria defining circumstances in which some state abridgement of free speech might reasonably be allowed in order to serve other constitutional goals, but the exceptions have been very few.
Proposed government restrictions that are based on the content of an expression have to be capable of standing up to an intense examination called "strict scrutiny." Under this test, restrictions can be justified only if the state is able to show a compelling public interest in doing so. Even then, it has to choose the least restrictive means for achieving the desired aim. Furthermore, if the proposed measures are too vague or too broad, in all likelihood they will be rejected as unconstitutional. In practice, the strict scrutiny test amounts to a presumption that any restriction on free speech is unconstitutional.
Government measures aimed at preventing the purely abstract dangers of hate speech, which would certainly encompass most substantive-value concerns, have always been struck down by the Supreme Court because they have not passed the strict scrutiny test. In 1952, the Court did hold, in Beauharnais v. Illinois,4 that the defamation of a group should not fall within the protection of the First Amendment. But even that decision, though never formally overruled, has not guided subsequent Court action, particularly following Collin v. Smith5 and R.A.V. v. City of St. Paul6 (Box 5.1).
In Brandenburg v. Ohio, 395 U.S. 444 (1969) (per curiam), the Supreme Court held that the First Amendment even protects speech that encourages others to commit violence, unless the speech is capable of "producing imminent lawless action." Thus, arguing that "if the First Amendment protects speech advocating violence, then it must also protect speech that does not advocate violence but still makes it more likely," a three-judge panel of the 9th Circuit Court of Appeals held that a Web site and posters calling abortion doctors "baby butchers" and criminals were protected by the First Amendment. The court stated that "political speech may not be punished just because it makes it more likely that someone will be harmed at some unknown time in the future by an unrelated third party."7
On the other hand, the Supreme Court has allowed exceptions to First Amendment protection when the expression could likely lead to a hate-engendered crime. In such cases, the Court has applied the "Clear and Present Danger Test." Expressions that give rise to a clear and present danger of criminal action, and thus infringe on the rights of some segment of the population, can be forbidden. This exception is called "communications tending to incite lawlessness" or "advocacy of unlawful action."
The German legal system, in contrast to the American system, generally penalizes hate speech. Given the experience under National Socialism and the former German Democratic Republic, the Federal Republic takes the position that a democracy has to be able to defend itself as a political system. There is a particularly strong feeling that it must be able to stop any attempt to reestablish a National Socialist authority. Interestingly, this vigilance is expected of Germany not only by the post-war German generation, resolved to resist National Socialism, but also by other countries that fought the Nazi regime and by ethnic groups, such as those of Jewish descent, that were victimized by it. In addition to measures targeted against hate speech, there are also German laws that prohibit the defamation of victims of National Socialist crimes, denial of the Holocaust, wearing of the swastika, and distribution of National Socialist propaganda.
The compatibility of these laws with the constitution has never seriously been questioned, even though in Germany, as in the United States, freedom of expression is an important value. The Bundesverfassungsgericht (the German equivalent of the U.S. Supreme Court) says that freedom of expression is simply an inherent aspect of democracy. However, the constitutional right of freedom of expression, as granted in Article 5 Abs. 1 GG, is worded as follows:
Anybody has the right to freely express his opinion in words, written materials, and pictures and to distribute it and to draw information from generally accessible sources without any interference. The freedom of the press and the freedom of broadcasting and film are guaranteed. There is no censorship.
These rights will find their barriers in the provisions of the general laws, the legal provisions for the protection of the youth and the right to personal honor.
The wording of this article is similar to guarantees in other Western European constitutions (for example, Article 10 of the European Convention on Human Rights). There is a good deal of room for interpretation in the words and, particularly in view of the last sentence, a number of circumstances in which this constitutional right can be restricted. Thus the prohibition against hate speech would fall under the category of a general law. Its provisions are viewed as "not directed against the expression of an opinion as such, but that rather serve the protection of a worthy legal value, without consideration of any special opinion" (italics added).8 Forbidding Holocaust denial has been justified by the Bundesverfassungsgericht as necessary to protect the personal honor of the Holocaust's victims, which would otherwise be threatened and compromised.9
There are efforts, sometimes driven by actions and interpretations of the European Court of Human Rights, to limit the extent to which the right of free speech can be abridged. For example, the Bundesverfassungsgericht requires that the conflicting interests be balanced and that there be a consideration of whether there are any less restrictive means available in order to achieve the intended goal. But, in the face of Germany's recent history, it is not surprising that the prohibition of hate speech is regarded as legitimate and appropriate.
The contrasts between Germany and the United States in regard to free speech are relatively easy to understand. The high tolerance in the United States for free speech is generally regarded as critical in a highly heterogeneous society--one with a long history of absorbing wave after wave of immigrant groups--to avoiding pressures that might otherwise arise to conform ideologically and culturally. Indeed, guaranteed individual and political liberties have always been one of the attractions of the United States to those forced to leave their homeland for reasons of political repression. Recent history in Germany, on the other hand, has provided a sad lesson in how fast political propaganda and incitement in a relatively homogeneous society can lead to the separation and murder of whole segments of the population. It has led to a broad consensus on the need to place limits on freedom of expression in order to preserve freedom generally.
This practical explanation raises the question of whether it is fair to characterize the American situation as one in which the formal value of free speech dominates any consideration of substantive values or whether the commitment to diversity, which free speech facilitates, is itself a substantive value. In the latter case, societal cohesion and individual liberty both support the idea of free speech, giving added weight to its protection. In the German situation, there is warranted concern that the shared substantive values--protection of the rights of minorities and the dignity of individuals--may be threatened by an unequivocal commitment to free speech; so the balance between the two plays out differently.
Both the United States and the Federal Republic are deeply concerned with protecting children and adolescents, and both have established laws in that spirit.10 Those that deal with material in print, film, or electronic media are of two basic kinds. First, there are laws aimed at preventing abuse and maltreatment, which make it illegal to distribute, purchase, or possess written materials, videos, and other items that depict child pornography. The argument is that such material is a stimulus to carrying out the acts depicted, and that it leads producers to abuse children in the course of its production.
It is no surprise, then, that on both sides of the Atlantic, legislatures have proscribed child pornography in every format and venue. The distribution of child pornography through the Internet, as well as its possession, is a criminal offense. Even images that have been created by computer or drawn, where children are obviously not involved in production, may be illegal.11
In neither country have constitutional concerns been seriously raised about these laws. In the United States, they meet the strict scrutiny test. In Germany, although the contents of child pornography are, in principle, protected by the Constitution, child and adolescent protection has been recognized as a legitimate basis for outlawing it.
The second area of law related to protecting minors aims at preventing them from being exposed to material that might be psychologically traumatic or might adversely affect their development. This is the more difficult area of the two. Much of the material is itself not considered innately harmful and, therefore, is not proscribed; the practical question is how to specifically control only the inappropriate material, and how to accomplish that without interfering with those who have a right to receive it. Here the balancing of rights comes into play more directly, as does the determination of the appropriate roles of government, the private sector, and parents. How, then, have the United States and Germany dealt with this set of issues?
In February 1996, the Congress adopted the Communications Decency Act (CDA), a sweeping law that held content providers criminally liable if a person under 18 years of age obtained "obscene," "indecent," or "patently offensive" material through any "telecommunications device." There was a so-called "safe harbor" provision, which protected a provider who made good-faith efforts to deny access to individuals under 18; such efforts would include the use of a credit card, a debit account, an adult access code, or an adult personal-identification number.12 The Act triggered immediate challenges and was quickly reviewed by the Supreme Court (Reno v. American Civil Liberties Union13).
The Court found (as had the lower courts) that the so-called Section 223 (47 USC 223) provisions of the CDA were too broad and too vaguely formulated. The vagueness of the expressions "indecent" and "patently offensive" allowed for such a wide range of interpretations that they could not be reconciled with the Court's strict criteria for allowing freedom of speech to be abridged. The chilling effect of the ambiguities in the law would lead producers to be so cautious that it would inhibit legitimate freedom of expression and restrict the availability of content that adults might quite legally want to obtain.
Even the safe-harbor clause was regarded as inadequate. It was not clear that the available access-control systems would be judged sufficient to trigger its protections. And even if they were effective, installing the controls would entail substantial costs beyond the capacity of most noncommercial providers; the law would therefore discriminate against them. Finally, much of the objectionable content came from abroad, where American law could not easily be enforced.
In response to the Court's action, Congress took a different approach, passing the Child Online Protection Act (COPA) at the end of 1998. COPA had a narrower scope of application than CDA, but its intention was similar and it has often been referred to as "CDA II." The intention of its sponsors was to deal with the Supreme Court's objections by dropping unacceptable terms like "obscene" and "indecent" and substituting a narrower "harmful to minors" standard. Furthermore, COPA dealt only with the commercial distribution of material and only on the World Wide Web. It did not try to regulate other Internet services such as newsgroups. COPA also included a safe-harbor provision that exempted from prosecution parties that take good-faith measures--through any reasonable means feasible under available technology (e.g., the use of a credit card)--to restrict access by minors to material that is harmful to them.
Still, many of the groups that objected to the CDA also found the new statute to be objectionable, and the American Civil Liberties Union (ACLU) and other groups challenged it in court. The United States District Court for the Eastern District of Pennsylvania issued a preliminary injunction against COPA, holding that the law was likely to be found incompatible with the First Amendment for many of the same reasons that the CDA had been rejected.14 Content providers would be inhibited, by fear of liability as well as by the costs associated with installing access-control software, in what they produced, with the net effect of adults being less able to receive legal material that they might want.
The District Court acknowledged that youth protection was a legitimate reason for restricting freedom of expression, but it argued not only that less restrictive means were available but also that the prescribed access-control systems would be of limited effectiveness anyway; they would not apply to foreign Web sites, noncommercial providers, or newsgroups. Less restrictive means, such as filtering software (discussed in Chapter 3), might be simpler, cheaper, and no less effective. On April 2, 2000, the U.S. Justice Department appealed the District Court's decision, and on June 22, 2000, the Third Circuit Court of Appeals upheld it. The U.S. Supreme Court has accepted a further appeal from the U.S. Government for its 2000-2001 term and is expected to hear this case in November 2001.
Another attempt to protect youth was enacted on December 21, 2000--the Children's Internet Protection Act (CIPA).15 CIPA requires public schools and public libraries that receive discounted service for Internet access through federal funds ("e-rate") to enforce a policy of Internet safety for minors. Such public institutions must use "technology protection measures" that prevent access to visual depictions that are obscene or "harmful to minors" or that contain child pornography. CIPA further defines material that is "harmful to minors" as material that if "taken as a whole and with respect to minors, appeals to a prurient interest in nudity, sex, or excretion; depicts, describes or represents, in a patently offensive way with respect to what is suitable for minors, an actual or simulated normal or perverted sexual act, or a lewd exhibition of the genitals, and taken as a whole, lacks serious literary, artistic, political or scientific value to minors."16 The American Civil Liberties Union and the American Library Association have announced their intention to challenge CIPA on First Amendment grounds.17
In 1997, a comprehensive Internet-specific law for the protection of minors was adopted in Germany that prohibits young people from receiving certain kinds of material. As part of the law's implementation, a list was developed of materials that are inappropriate for minors and that may not be distributed by electronic means or, for that matter, made accessible in any other way. The list includes material that is "immoral, [has] a brutalizing effect, [gives] incentive to violence, crimes, or racial hatred . . . [or glorifies] the war"--categories that clearly go beyond the proposed laws in the United States.
Like the U.S. legislation, the German law places responsibility for limiting access primarily on the provider. The law also has a safe-harbor provision, insulating providers from prosecution if they have made a good-faith effort to prevent minors from accessing the inappropriate material. The law describes such an effort as making "technical provisions . . . to restrict the offer or its distribution within [Germany] to adult users." The kind and type of these "technical provisions" have not yet been specified.
Of course, either the provider or the user can do the actual restricting. The German law allows for user-initiated controls when it is the user who initiates access to the inappropriate material. Material is allowed to be distributed this way only "if devices are supplied by the provider or other(s) that allow the user to block these offers." This leaves to parents or guardians the decision of whether or not to use the blocking device. Again, in this case, the legislature did not specify what kinds of blocking devices would be suitable--a not-unreasonable stance given the dynamic nature of the technology.
Although all the arguments used to challenge the CDA and the COPA as unconstitutional in the United States would apply to the German law, no significant objection to the law has been raised in Germany. It is another indication of the difference in attitude toward freedom of speech in the two countries, discussed at length in the previous section. It also reflects the fact that, in Germany, the public is willing to give administrative authorities the latitude to administer a law that might threaten constitutional rights, on the assumption that those rights will be taken into account in actually applying the law's provisions. This difference in attitude toward government appears to be an important distinction between the two countries. The greater trust in government, evident in this as well as other examples, leads German society to look more to government itself, rather than to tightly drawn laws, to protect and balance rights and values.
The safe-harbor provisions in U.S. and German law introduce an incentive for the development of appropriate screening technologies, but it is not clear at this point either how effective these new technologies are likely to be or what new threats to constitutional rights they might introduce. For example, the U.S. courts have viewed filters as a reasonable approach to controlling inappropriate transmissions. But many in Europe (and the United States) worry that the use of these systems, even by private organizations, might amount to precensorship, particularly where the filtering is based on the identification of unacceptable sites rather than specific material.
In Europe, there is also the fear that political or religious zealots might wield control over the site-assessment procedure and that the systems might become oriented too much toward American moral concepts. Some have favored systems that control which users have access to the site of a particular content provider. Similar to age restrictions on the purchase of written material considered harmful to young people, or a visit to an establishment in a red-light district, access to the online offerings would be denied to adolescents, but not to adults. Unfortunately, this "zoning approach" also requires content evaluation. Also, it is unclear whether digital age-verification systems or similar access controls can really work in the highly decentralized world of the Internet. Thus, both the technical uncertainties and the different political value judgments in the two societies continue to present serious challenges, which are discussed in greater depth in the following section.
Even if government policymakers decide under what circumstances to restrict freedom of speech for the sake of a competing substantive value, many issues remain. What is the appropriate point of intervention? Is it the content provider, the recipient, or one of the intermediaries between them, such as the Internet service provider? Here the question is not merely one of where it is most practical to interdict inappropriate transmissions, but which party should be made responsible to act, although the two facets of the question may well be related. Furthermore, should the potentially harmful content be prohibited, or is it sufficient to make access more difficult or more costly? The latter approach is exemplified by the "watershed rules" that are typical for television broadcasting in many countries, which restrict the airing of "offensive" material to the late-evening hours. These questions take on great importance because even though an initial value balance may have been made in reaching the decision to restrict free speech, the way it is implemented could profoundly alter the balance.
Given some of the practical difficulties in holding providers or recipients responsible for restricting access to potentially harmful content, a number of efforts have been made to hold intermediaries responsible. Because intermediaries are relatively few in number--at least compared to the number of content providers or recipients--and they generally have a local presence in order to do business, they are an attractive target for regulation.
Yet regulation of intermediaries presents different kinds of difficulties. Access providers connect content producers and users to each other via the Internet. Once the connection to the network has been made, they have no influence either on what material moves through the wires or where it goes. They are much like the postal system in that they don't know the contents of the messages they deliver. In consequence, it is generally agreed in both the United States and Europe that the access provider should not be held responsible for the contents of messages. In Europe, this has been codified in the E-Commerce Directive of the European Union18 as well as the German Teleservices Act.19 In the United States, so-called "common carrier" provisions allow certain carriers of communications to carry all manner of traffic without liability (e.g., telephone service providers), and more recently, Congress granted limited immunity to access providers for violations of copyright law in the Digital Millennium Copyright Act.20
Host providers, however, are a different story. A host provider may be a portal or a proprietary service that gathers in one place a large amount of third-party content for user access. Being closer to a virtual forum site or bazaar than to a postal system, it provides Web space, helps its subscribers find material more easily, and establishes "bulletin boards" and e-mail services. Generally, the host provider does not have anything to do with the contents placed on the server, but a good deal to do with their organization in the "marketplace."
Because the host provider offers more than a connection service, the question of liability is more complicated. Legal systems have to determine when the value added by the host provider's services begins to make it look less like an access provider and more like a content provider. The task is made all the more difficult as the ground keeps shifting: new technologies create new business opportunities for inventive entrepreneurs, and the services offered by host providers change. It is unlikely that a simple or permanent resolution to this question, or to the differences between the United States and Europe in this area, is in the offing.
Access providers and host providers are not the only important Internet intermediaries. Search-engine operators, mirror sites, and local hosts all play a role in connecting producers and users. Technological changes that begin to blur the distinction between broadcast and network media will add to this group--and to the problems of sorting out liability. Each will add to the challenge of harmonizing U.S. and European approaches to these issues, which are already difficult, as the discussion below demonstrates.
Although the provisions of 47 USC 223 of the CDA, described earlier, quite clearly made providers liable if inappropriate material got into the hands of minors through the Internet, 47 USC 230 of the Act declared that third parties--that is, intermediaries--were not responsible for material they transmitted and were not liable for refusing to transmit material they viewed as "questionable."21 Congress's position was probably influenced to some degree by contradictory court decisions that had been handed down on the question of host-provider liability.22 However, the Act's language suggests that Congress was also guided by the belief that interactive computer services should be given strong protection because they offer "a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity." These services had already developed, helping the United States to establish its leadership in the networked world, and the Act's findings declared it the policy of the United States "to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation."
In the court tests thus far, 47 USC 230 has fared very well, with intermediaries held harmless from liability whether or not they have known what they were transmitting, known that it was illegal, or even if they paid the provider for it.23 The Supreme Court has not yet handed down any rulings on this section, but all indications are that the strong commitment to freedom of expression in the United States will continue to result in support for 47 USC 230. Furthermore, the wording of the Act and the actions of the lower courts are consistent with the American belief that self-regulation is preferable to governmental controls.
The laws of the Federal Republic place much greater responsibility on host providers, although they do not regulate other intermediaries such as search-engine operators or providers of hyperlinks. In Germany, host providers are "responsible for foreign contents that they provide for use if they had knowledge of these contents and it is technically possible, and also reasonable, to prevent their use." This is called "notice liability"; that is, if one knows about the material, one is liable if no action is taken to remove it. Furthermore, under German law, a provider cannot defend itself by arguing that it didn't consider the questionable contents to be illegal. Article 14 of the EU Commission's Directive on Electronic Commerce takes the same approach.
There have been no explicit constitutional objections to this law raised in Germany. It obviously goes in a very different direction from U.S. law. However, many argue that the host provider's liability is actually more limited than it may appear because the provider need only act if it is "technically possible . . . and . . . reasonable" to prevent the distribution of the objectionable material. This allows for some judgment and balancing by the prosecutors and courts in deciding, for example, whether a small provider could "reasonably" be expected to install blocking software so expensive that it might put the company out of business. Moreover, the law does not require that the host provider make an active effort to root out illegal material.
With these factors softening the impact of the liability provisions, there appears to be a broad consensus throughout Europe that the German law and the E-Commerce Directive of the EU Commission represent an appropriate middle path. In the view of most Europeans, these regulations balance the protection of minors with the right to freedom of expression and the economic interests of host providers.
With the laws in the United States and Germany as different as they are in this case, and with the strong consensus and deep, principled conviction that exists in each country for its own law, it is difficult to see how a practical compromise can be achieved and easy to see how the differences will inevitably lead to conflicts. The Bavaria v. CompuServe case, mentioned earlier in Chapter 3, certainly demonstrates the problem.
American criticism of the German action in the CompuServe case was based on the strong objection in the United States to any action that would (1) have a chilling effect on freedom of speech and (2) unreasonably or unnecessarily burden a private company with economically debilitating regulations. Germans, for their part, are generally much less concerned than Americans that government regulations might burden industry, if those regulations appear otherwise warranted. Furthermore, most Germans would attach more importance to the protection of minors than to the protection of free speech and would have no compunction about forever blocking a transgressing newsgroup--or even 282 of them--if it were necessary to prevent the distribution of child pornography.
But another source of the tension that arose in this case was the frustration of the German prosecutors, who had very little leverage to take action against CompuServe USA. Because the company is headquartered in the United States and its executives live there, German law could not reach them. The United States would not cooperate in extradition proceedings because the company's actions were not violations of U.S. law.
The Munich prosecutor, anxious to enforce the German law on child pornography, instead charged the executive director of CompuServe Germany, the local affiliate, with violation of the law. The problem, of course, was that the local affiliate had no way of blocking the offending newsgroups. Thus the prosecutor's actions were criticized in Germany as well as in the United States; but the German criticism arose not because of any objection to host-provider liability but because the person charged was not the person responsible. In fact, though the executive director was initially found guilty, the conviction was overturned in November 1999 precisely because the court recognized that he was neither responsible for sponsoring the newsgroups nor able to remove them from the network.
The difficulties in regulating Internet content epitomize the challenges that global networks present for governance. It therefore does not come as a surprise that almost all the elements discussed in Chapter 9 (on governance) in abstracto have a bearing on content regulation.
It is useful to keep in mind that the Internet contributes to globalization in two ways. First, it is a global entity that brings together cultural and political influences from many countries and gives rise to a burgeoning new field of commerce. Second, the Internet makes it possible for established businesses to coordinate activities across the globe through various commercial arrangements, freeing them to a certain extent from the constraints of geography and national boundaries.
Globalized business activities are much more difficult for governments to regulate and control, both because they may not be physically located within a country's boundaries and because nations compete to attract businesses.24 This reduces the feasibility of strong, unilateral command-and-control as well as the reach of penal law. The change is one of degree, and national governments certainly do not lose all their options.25 For example, a person residing in a country can be held liable for violation of its national law or regulation even if he or she is part of an international business or the illegal action involves transmission of inappropriate material from another country.
Similarly, a nation could enforce its laws extraterritorially by attaching a foreign company's assets that happened to be located within its boundaries or even arresting a visiting company official.26 Under German law, prosecutors not only would be allowed to take these actions, but are actually required to do so. With respect to Internet sites, some have suggested that nation-states could actually go further. They might attack foreign Web sites that contravene their laws, using such technical means as denial-of-service attacks similar to those mounted by hackers against Yahoo! and amazon.com.27 There seems little question that such tactics would violate public international law28 but, perhaps more to the point, they illustrate how the initial value balance involved in a decision to restrict transmission of certain content can be distorted by the means employed to implement the decision.
The ideal situation, of course, would be one in which national laws pertaining to the Internet and other global activities were harmonized. That does not seem to be a realistic expectation for the foreseeable future, however. So the most reasonable hope is for cooperation among governments to help providers and hosts understand the laws and regulations in each jurisdiction. Over time, this kind of transparency might lead toward creative harmonization and compromise.
The practical question is how far one nation can go in imposing laws and regulations in a global economy in which firms have the ability to withdraw their activities from the nation's territory. Some observers believe that this threat is overstated--that firms are unlikely to abandon a large national market that would be difficult to maintain without some presence in the country. There may also be other reasons for keeping a presence in a country, including the preference of investors or the availability of research-and-development capacity. However, although these considerations may make it impractical for a firm to avoid a nation's laws on illegal Web content or its intellectual-property regulations, it is certainly possible for the firm to move large parts of its operation offshore, to the detriment of the nation's economy.
International treaties provide one way of creating global order in a world where there is no supranational government. They work reasonably well when there is a common view on the values to be protected, general agreement about what needs to be done, and an obvious advantage in dealing with the issues on a global basis. A number of treaties are in existence today that appear, at least nominally, to deal with matters closely related to some of the content issues that have arisen with regard to the Internet.
For example, the Convention on the Prevention and Punishment of Genocide, dating from 1948,29 requires the parties to make criminal the "direct and public incitement to commit genocide." The 1966 International Convention on the Elimination of All Forms of Racial Discrimination30 proscribes words and acts of racial discrimination. The United Nations' International Covenant on Civil and Political Rights of the same year31 not only deals with human rights, but also bans war propaganda and "every encouragement of nationalistic, racial, or religious hatred [that] incites discrimination, animosity, or violence." In addition, there is a UN International Convention on the International Right of Correction from the year 195332 (although neither the Federal Republic nor the United States has adopted it).
One promising approach to internationalizing some aspects of Internet regulation would be to extend existing treaties to the new context. That would require a willingness on the part of each signatory country to interpret or extrapolate the treaty's provisions to the new environment of the Internet and to amend its own national laws to reflect the new interpretations. Thus far, that has not happened.
For these and other reasons, there continues to be a push for new treaties to achieve international harmonization. Of course, they are easier to negotiate when nations largely agree on the issues. That requires either finding issues on which there is essential unanimity to begin with or defining a set of countries or a region with largely shared values. What should be evident from the discussion in this chapter is that the value agreement must pertain not only to the problem giving rise to the challenge but also to the appropriateness of government roles and regulatory tools for implementing a solution.
At the moment, the one area in which it appears likely that some international harmonization will be achieved, at least in Europe, is the regulation of child pornography. In June 2001, the European Committee on Crime Problems (CDPC) of the Council of Europe approved the Draft Convention on Cybercrime, which was submitted to the full Committee of Ministers for adoption in September 2001. Article 9 of the Draft Convention commits signatories "to adopt such legislative and other measures as may be necessary to establish as criminal offenses under its domestic law, when committed intentionally and without right," acts that relate to child pornography.33 In addition, a supplement to the Europol agreement is being prepared that gives the European police authorities wider jurisdiction to deal with the production, sale, and distribution of child pornography.
However, the inclusion of content-related offenses other than those related to child pornography (e.g., the "distribution of racist propaganda through computer systems") proved too controversial to include in the Draft Convention. The European Committee on Crime Problems may consider an additional protocol relating to these offenses, but it faces opposition from a number of civil liberties organizations.34
The problem with harmonization is that if consensus requires drawing a too-small circle of cooperating nations, violators can find a regulatory haven fairly easily in a nation-state not party to the convention. There are, of course, political and economic pressures that can be brought to bear on nonsignatory states to bring them into compliance. And for that matter there are carrots as well as sticks, as has been shown in certain aspects of global environmental protection.35
There are dangers in this approach, however, where global networks are concerned. The uneven penetration of the Internet (and its benefits) has already created a global sense of "haves" and "have-nots" that might well be exacerbated by unidirectional pressure from the United States or Europe on other nations, regardless of the merit of their position. Beyond that, there is the danger that harmonizing with a particular set of values, or adopting a universal approach to the structure of legal institutions, will reduce the very diversity that the Internet has the useful potential to promote.
As pointed out elsewhere in this report, there are a number of circumstances in which commercial law--rules that have been developed for resolving business conflicts by coordinating the laws of different nations--could be used to deal with harmful contents accessible through the Internet. Consumer fraud, for example, does not change its legal character just because it is carried out with the aid of a Web page.
Nevertheless, commercial law is a weak foundation for matters such as child pornography and politically tainted hate speech. The major problem in such cases is that the potential harm is to people who are not likely to bring a private legal action for redress, may well not have standing to sue, and might have a difficult time proving damage. Who would sue and how would the case be made if easy access to child pornography increased the risk that more children might be abused? Who would sue and what would be the proof if easy access to Nazi propaganda increased the risk that extreme right-wing political forces might gain on the next Election Day? Even if the law gave standing to the public at large, would enough people have the incentive and the wherewithal to bring such actions?
A number of groups, certainly among them the Netizen and e-commerce communities, argue that in most instances the best approach to controlling the diffusion of offensive Internet-based material is self-regulation. The great attraction of this approach is the flexibility it provides; individuals can make their own judgments about what material they want to avoid (or to access), and the need to force value consensus within a particular country or across the globe is removed. When one nation's nudity is another's pornography, broad consensus is next to impossible. On the other hand, access-control systems, age-verification systems, and various kinds of filtering software can facilitate customized nonstate regulation.
To understand filtering systems, it is important to distinguish between a site's content and the judgment one makes about it. For example, though a site might have an image of a naked woman or a swastika, there may be many judgments about whether or not such content is offensive--one person might think so; another might not.
Many filtering systems are designed by vendors who act both as labeler and judge--they describe the content and also make a judgment about appropriateness (though they may or may not provide the user with an option to override their judgment). A second approach is to separate the functions of labeler and judge. To facilitate content labeling, the World Wide Web Consortium designed the Platform for Internet Content Selection (Box 5.2), which provides a standardized vocabulary and format for labeling content. Once labels have been associated with specific content, the user can deploy a filter that examines the labels associated with incoming content, and based on those labels, makes judgments about whether content with certain labels should or should not be displayed.
Note that different filters can behave differently with regard to the same content. That is, Filter A may allow content that is labeled as containing "nudity" and reject content that is labeled as containing "swastikas," while Filter B may do exactly the opposite.
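To make the division of labor between labeler and judge concrete, the following is a minimal sketch in Python. The label vocabulary, URLs, and helper names are hypothetical, and the sketch ignores the standardized PICS label syntax; it illustrates only the logic by which a filter combines content labels with a local policy to reach an allow-or-block decision.

    # Minimal sketch of label-based filtering in the spirit of PICS (Box 5.2).
    # The label vocabulary and the example URLs are invented for illustration;
    # real PICS labels use a standardized syntax and are typically supplied
    # by third-party rating services.

    # Labels previously associated with content, kept separate from any
    # judgment about whether that content is acceptable.
    CONTENT_LABELS = {
        "http://example.org/page1": {"nudity"},
        "http://example.org/page2": {"swastikas"},
    }

    def make_filter(blocked_categories):
        """Return a filter embodying one user's policy: reject content
        carrying any label in blocked_categories; allow everything else."""
        def allow(url):
            labels = CONTENT_LABELS.get(url, set())
            return labels.isdisjoint(blocked_categories)
        return allow

    # Two filters applying opposite judgments to the same labels.
    filter_a = make_filter({"swastikas"})  # allows nudity, rejects swastikas
    filter_b = make_filter({"nudity"})     # allows swastikas, rejects nudity

    for url in sorted(CONTENT_LABELS):
        print(url, "A:", filter_a(url), "B:", filter_b(url))

The point of the separation is that the labels themselves are descriptive and value-neutral; all of the judgment about offensiveness resides in the policy each filter applies.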
A second issue is that the scope and granularity of the labeling are critical. If the labeling vocabulary does not include a category for "swastikas," a filter based on this approach cannot block content containing swastikas. At least one particular vocabulary--of the Internet Content Rating Association--allows labeling of sites that contain certain kinds of language, nudity or sexual content, violence, and information related to gambling, drugs, and alcohol. However, there is no reason in principle that a party concerned about other categories of possible offensiveness cannot create vocabularies that cover them (though in practice, obtaining a broad scope of coverage for such alternatives is difficult).
Though filtering systems can be created by anyone, the required effort may be large. In principle, the organizations responsible for filtering systems must stand behind the judgments they make about offensiveness (and perhaps about content labeling as well), and users of filtering systems may make their own judgments about the attractiveness of products from different vendors based on how well their own values about offensiveness are reflected in the vendors' judgments. Thus, users not wishing to see pro-racist material might use filters developed by civil-rights organizations, or users not wishing to see anti-religious material might use filters developed by their church.36
One of the attractive features of a labeling system is that it is inherently self-policing. The value of the label depends on the reputation it develops for reliability. Each site that receives the label's endorsement has a stake in giving it meaning. The user community itself has an interest in the quality of the label and can also be part of the enforcement process.
As movie- and video-rating organizations in the United States have learned, making judgments about offensiveness is fraught with difficulties. Such groups must tread a fine line between being overly rigid and prescriptive in their classifications and being so ambiguous that no real information is conveyed to the user. Generally speaking, categories or rules that have some flexibility are more likely to be suitable for a rapidly changing world like the Internet.
An important technical issue is the extent to which computer-executable rules for distinguishing between appropriate and inappropriate content can be formulated. Some of the filtering software with which people have experimented thus far has shown how difficult this can be, sometimes leading to absurd results, as when some particular words are coded as unacceptable. Moreover, filtering systems are usually designed with some particular point of view to take advantage of a market, pursue an ideological agenda, or avoid liability on the part of the software provider. This means that, at least until now, there has been little incentive for transparency in how the filters are created37 and little attempt to take opposing interests or values into account, as one might hope would be the case in a legislative approach to regulation.38 In that sense, filter systems can work against certain free speech values of a community and, indeed, help to de-integrate the community.
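The "absurd results" mentioned above are easy to reproduce. The following sketch, with an invented word list and invented sample texts, shows how a rule that blocks any text containing certain character strings over-blocks: it cannot distinguish a targeted word from an innocent word or phrase that happens to contain it.

    # Minimal sketch of naive keyword filtering and its failure mode.
    # The blocked-word list and the sample texts are hypothetical.

    BLOCKED_WORDS = {"sex", "breast"}

    def naive_block(text):
        """Reject text if any blocked word appears anywhere as a substring."""
        lowered = text.lower()
        return any(word in lowered for word in BLOCKED_WORDS)

    samples = [
        "School sports results from Middlesex county",  # blocked: contains "sex"
        "Support group for breast cancer survivors",    # blocked, though medical
        "Ordinary news story about the weather",        # allowed
    ]

    for text in samples:
        print("BLOCKED" if naive_block(text) else "allowed", "--", text)

More discriminating computer-executable rules require context that simple string matching does not capture, which is one reason the formulation problem described above is so difficult.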
Host providers have a different problem in undertaking self-regulation; the control systems available to content providers or content users are not applicable to them. First, the material that host providers carry is an aggregate from a huge spectrum of content providers; and second, they are not end users, so filtering software would be inappropriate. Many host providers have adopted their own codes of ethics. They may commit themselves, for example, to checking complaints about sites that come from users or to working cooperatively with legal authorities of particular nation-states to take action against sites involved in illegal activity.
Critics of self-regulation point out that because such codes of ethics are unenforceable, they are primarily symbolic. However, it may be possible to develop a legal framework that would make codes enforceable, even if the host providers themselves determined the details of the code. A more serious criticism is the possible curtailment of free speech; the codes may deprive content providers who are sanctioned or excluded by a host provider of the due process they would have under a more formal legal structure. Such points have not been thoroughly discussed at this early stage in the development of these self-regulatory instruments.
The role of host providers as intermediaries between users and content providers suggests that it may be inappropriate to think of them as engaged in regulation per se. Their role in a nongovernmental regulation scheme is to provide a service to users who would like to be shielded from harmful or otherwise unwanted contents. Users could do this for themselves by simply not accessing certain sites or by installing filters on their computers (or using other technologies that may be available in the future), or they could access the Internet via a service provider with a declared access policy. Whether users want to pay for the host's service is something to be determined by the market. In fact, it would appear that, in the future, host providers will compete with each other and with companies producing self-help tools like filters, and users may choose on the basis of convenience, comprehensiveness, and selectivity.
Self-regulation and intermediation have many attractive features, but if governments do not intervene, the market alone will shape the array of mechanisms actually used to control the distribution of harmful content. These mechanisms, in turn, will largely determine what material is electronically available to whom. Obviously, the outcome may not always conform to the values of the society. It might therefore be useful to consider hybrid forms of regulation, combining public and private controls.
Governments can use both sticks and carrots to influence the operation of self-regulatory schemes.39 As pointed out earlier, command-and-control regulation of content providers doesn't work very well in the networked world. The CompuServe case indicates that an alternative for governments is to threaten action against host providers. But there are softer options.
Governments can insist on an organizational framework for self-regulation that gives outside interests a voice and ensures that the process of developing and applying a rating system or excluding a provider from a host network is transparent. They can give industry limited antitrust or liability protection to encourage joint rulemaking and vigorous joint action. Or they can set up an authority to check on how well self-regulation is working (a role played by the U.S. Federal Trade Commission with respect to certain privacy issues and other aspects of consumer protection). It is even possible to envision governments supporting or encouraging education and training programs to improve the media competence of users so that they are better able to use the self-help tools that become more and more available as technological advances occur.
It does seem likely that a hybrid regulatory approach will eventually emerge, but it is difficult to predict the particular balance of mechanisms that will actually obtain in each country. The experimentation now going on appears to be healthy; if there is a bottleneck, it is the legal system's difficulty in understanding the technical possibilities and reacting quickly and flexibly to them. In an area as technologically dynamic as this one, and as capable of bringing about major social change, expert panels similar to those developed under the aegis of the Intergovernmental Panel on Climate Change could play an important role. They might be especially useful in advising governments on the state of the technology and the feasibility of various regulatory approaches.
1 Michael Thompson, Richard Ellis, and Aaron Wildavsky, 1990, Cultural Theory, Boulder, Colo.: Westview Press; for an application to the topic of this report, see Michael Thompson, 2000, "Global Networks and Local Cultures: What Are the Mismatches and What Can Be Done About Them?," in Christoph Engel and Kenneth H. Keller, eds., Understanding the Impact of Global Networks on Local Social, Political and Cultural Values, Baden-Baden: Nomos, 113-130.
2 Computer Science and Telecommunications Board, National Research Council. 1994. Rights and Responsibilities of Participants in Networked Communities. Washington, D.C.: National Academy Press.
3 The International Covenant on Civil and Political Rights was adopted and opened for signature, ratification, and accession through U.N. General Assembly resolution 2200A (XXI) of 16 December 1966. It entered into force on 23 March 1976.
4 343 U.S. 250 (1952).
5 578 F.2d 1197 (7th Cir. 1978); cert. denied 439 U.S. 916 (1978).
6 505 U.S. 377 (1992).
7 244 F.3d 1007 (9th Cir. 2001); reh'g en banc granted, 268 F.3d 908 (9th Cir. October 3, 2001). The latter citation refers to an order from the court that the case be reheard by the en banc court, with the three-judge panel opinion not to be cited as precedent by or to this court or any district court of the Ninth Circuit, except to the extent adopted by the en banc court.
8 BVerfGE 7, 198, 209 f.
9 BVerfGE 90, 241, 252.
10 In Germany it is even at the constitutional level; see Art. 6 Abs. 2 GG or Art. 5 Abs. 2 GG. But in the United States as well, the Supreme Court found, in the decision of Ginsberg v. New York (390 U.S. 629 (1968)), that the state had a legitimate interest in protecting the physical and psychological well-being of minors.
11 In the United States, the Child Pornography Prevention Act of 1996 (CPPA) expanded the definition of child pornography to include any visual depiction of individuals who appear to be minors, or any visual depiction presented in a manner that conveys the impression of a minor, engaging in sexually explicit conduct. (As of this writing [November 2001], this provision of the CPPA is pending before the Supreme Court. It was held unconstitutional by the U.S. Court of Appeals for the Ninth Circuit (Free Speech Coalition v. Reno, 222 F.3d 1113 (9th Cir. 2001)) but was upheld by the First, Fourth, Fifth, and Eleventh Circuits (United States v. Fox, 248 F.3d 394 (5th Cir. 2001); United States v. Mento, 231 F.3d 912 (4th Cir. 2000); United States v. Acheson, 195 F.3d 645 (11th Cir. 1999); United States v. Hilton, 167 F.3d 61 (1st Cir. 1999), cert. denied, 528 U.S. 844, 120 S. Ct. 115, 145 L. Ed. 2d 98 (2000)).) Under the U.S. criminal code, the possession, distribution, and transportation of child pornography so defined are felonies. In Germany, Section 184 of the German Criminal Code prohibits the distribution of both "real" and "fictive" child pornography (real, involving real persons; fictive, involving drawings, computer-produced images, and even written or acoustic material). However, the German Criminal Code does not prohibit the possession of fictive child pornography.
12 To allow technological innovations in this area to be taken into account without a statutory change, every feasible method would be treated in the same way; the Federal Communications Commission would have had the task of choosing suitable systems and qualifying them as such. The aim of the safe harbor clause, similar to that of age restrictions on youth-endangering publications or on visits to establishments in red-light districts, is to deny access to online offerings to adolescents only, not to adults. This so-called zoning approach does not seek the complete criminalization of the content.
13 521 U.S. 844 (1997).
14 American Civil Liberties Union v. Reno, 31 F. Supp. 2d 473 (E.D. Pa. 1999). This decision can be seen online at <http://www.cdt.org/speech/copa/990201ACLUvsRENOdecision.shtml>.
15 P.L. 106-554, § 1(a)(4), 114 Stat. 2763.
16 47 U.S.C. § 254(h)(7)(G).
17 Multnomah County Library v. United States, No. 01-CV-1322; <www.aclu.org/features/fo02001a.html> (site visited April 26, 2001).
18 Directive 2000/31/EC of the European Parliament and of the Council on certain legal aspects of information society services, in particular electronic commerce, in the internal market (Directive on Electronic Commerce), Official Journal of the European Communities L 178/1 (July 17, 2000); available online at <http://europa.eu.int/ISPO/ecommerce/legal/documents/2000_31ec/2000_31ec_en.pdf>.
19 The German Teleservices Act is part of the Information and Communication Services Act of July 7, 1997, BGBl. I, S. 1870; available online at <http://www.iid.de/iukdg/gesetz/engindex.html>.
20 17 U.S.C. § 512 (limiting liability for persons who transmit, route, provide connections for, or provide intermediate and transient storage of material infringing copyright).
21 These provisions, referred to as "Good Samaritan" clauses, make clear that no provider is liable because it attempts, in good faith and by the use of computer programs, to remove questionable content from its servers or to block access to such content.
22 Cubby v. CompuServe, 776 F. Supp. 135 (S.D.N.Y. 1991); Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).
23 A description of cases involving 47 U.S.C. § 230 can be found online at <http://www.techlawjournal.com/courts/zeran/47usc230.htm#cases>.
24 On the governance of the Internet in greater detail, see Christoph Engel, 2000, "The Internet and the Nation State," in Christoph Engel and Kenneth H. Keller, eds., Understanding the Impact of Global Networks on Local Social, Political and Cultural Values (Law and Economics of International Telecommunications 42), Baden-Baden: Nomos, 201-260.
25 This point has been stressed repeatedly by Jack Goldsmith. In the context of this report, see in particular Jack Goldsmith, 2000, "The Internet, Conflicts of Regulation, and International Harmonization," in Christoph Engel and Kenneth H. Keller, eds., Governance of Global Networks in the Light of Differing Local Values, Baden-Baden: Nomos, 197-207.
26 For greater detail, see Werner Meng, 1994, Extraterritoriale Jurisdiktion im öffentlichen Wirtschaftsrecht [Extraterritorial Jurisdiction in Public Economic Law], Berlin.
27 Cable News Network, "Cyber-attacks Batter Web Heavyweights," February 9, 2000. See <http://www6.cnn.com/2000/TECH/computing/02/09/cyber.attacks.01/index.html>.
28 Cf. Jamie Frederic Metzl, 1997, "Rwandan Genocide and the International Law of Radio Jamming," American Journal of International Law 91:628.
29 Convention of December 9, 1948, BGBl. 1954 II, 729.
30 Convention of April 7, 1966, BGBl. 1969 II, 961; compare also BT-Drs. 13/1883.
31 International Covenant on Civil and Political Rights of December 19, 1966, BGBl. 1973 II, 1533.
32 Convention on the International Right of Correction of March 31, 1953, UNTS 435, 192. The "right of correction" refers to the right of a nation "directly affected" by a private or public report that it considers "false or distorted" to secure "commensurate publicity" for the "corrections" that the nation wishes to publicize.
33 These acts include producing child pornography for the purpose of its distribution through a computer system; offering or making available child pornography through a computer system; distributing or transmitting child pornography through a computer system; procuring child pornography through a computer system for oneself or for another; and possessing child pornography in a computer system or on a computer-data storage medium. See <http://conventions.coe.int/treaty/EN/projets/cybercrime27.htm>.
34 See <http://www.privacyinternational.org/issues/cybercrime/coe/ngo_letter_601.htm>.
35 See Rüdiger Wolfrum, ed., 1996, Enforcing Environmental Standards: Economic Mechanisms as Viable Means, Berlin.
36 A fuller discussion of the advantages, disadvantages, and other realities of filters is contained in CSTB, National Research Council, Youth, Pornography, and the Internet: Can We Provide Sound Choices in a Safe Environment?, Washington, D.C.: National Academy Press, forthcoming.
37 This is not to say that it is impossible or even difficult to increase the transparency of filters by making available the lists of Web sites that are blocked or the lists of keywords deemed objectionable. However, vendors of filter products often argue that their lists of blocked sites or "bad words" are their intellectual property, and that publication of such lists would deprive them of the benefits of their work if others took those lists as a starting point for developing their own.
38 This point has been made recently by Lawrence Lessig, 1999, Code and Other Laws of Cyberspace, New York: Basic Books.
39 For the theoretical framework, see Fritz W. Scharpf, 1997, Games Real Actors Play: Actor-Centered Institutionalism in Policy Research, Boulder, Colo.: Westview Press.