Global Networks and Local Values
This chapter examines the evolution of global networks, mainly the Internet, and seeks to relate general features of the architecture and design of these communications systems to the values inherent in or reinforced by the technology. The focus is not on specific values such as privacy, intellectual property rights, or free speech, some of which are analyzed in other chapters of this volume, but rather on two more general phenomena: how values and interests have shaped and become embedded in the specifications of technological systems, and how the technical features of such systems in turn affect the values of the communities that make use of them.
The Internet emerged as a mega-network, so to speak, of technically and socially heterogeneous electronic communications networks. There were no formal obligations imposed on participants to converge on any uniform set of technical practices or social values in developing or using the Internet. At the same time, the benefits of ever-wider connectivity achievable by the system provided strong incentives for diverse public and private entities to ensure that hardware, software, and organizational structures were compatible and complementary. Thus far, this process of de facto standardization has been limited to technical specifications for interoperability and to basic social norms and business practices that are recognized as being conducive to the further growth of "inter-networking."
The way that the Internet has evolved historically from its origins in ARPANET, and the development of the World Wide Web (WWW) from its beginnings at the European Organization for Nuclear Research (CERN) in Geneva, underscore a paradox. Although the "network of networks" has provided a culturally and politically heterogeneous array of societies with global connectivity, the key technologies of the system were originally designed to suit the needs of publicly funded scientific research groups. While widely distributed geographically and situated in a variety of academic and quasi-academic institutions, these groups were nonetheless very homogeneous with regard to the values shared among their respective work cultures.
Moreover, the scientific work groups within which the Internet's core technology was formed made little provision for coping technically with such issues as content, privacy, security, and identity.1 The relative emphases reflected a focus on resource sharing, communication, and collaboration among the original communities of designers and users, whose members were scientists and engineers selected according to criteria of technical competence and, in some cases, of national security.
It should not be surprising, therefore, that some palpable sociocultural frictions emerged as the network of networks became a universal communications facility--the "global information superhighway," in 1990s argot. The difficulties that have arisen over the control of content (based on "acceptability" concerns) on the Internet are, in some sense, a reaction to the technical and, increasingly, economic ease with which the Internet's reach can be extended into many diverse cultural settings. The set of electronic communications systems that evolved into the Internet carried with it technological design features that were in some respects quite unlike those of the existing telecommunications networks: more content can be discovered and pulled in from more sources, and more can be sent, relatively easily and inexpensively.
One consequence of these features was that they enabled the rapid spread of digital communications channels that simply bypassed the established licensing procedures and other kinds of authorization evident in, for example, broadcasting. Radio and television have afforded local or national political jurisdictions the opportunity to pre-assign responsibilities for, and place a variety of restrictions on, the content and conditions of programs' delivery.
A second noteworthy dimension of the "value" conflicts that have emerged with the Internet's explosive growth stemmed from the formation, among some pioneer users of these internetworked facilities, of a new and distinctive cultural ethos of "cyberspace." This culture drew strength from the fusion of network engineers' and software programmers' enthusiasms for experimentation in this new technological domain, and it evinced an occasional anti-authoritarianism that took many faces--for example, frustration with the controlled telecommunications context in which the Internet technology arose, the extracurricular development of UNIX (an operating system fundamental to the early Internet) within AT&T, and the creativity of the "computer hacker" communities that only later became associated with destructive intent.
Popular perceptions tend to inflate the role of the Internet as the most recent among the "technologies of freedom,"2 and they tend to intensify representations of any governmental posture other than laissez-faire as contests between a reactive authoritarian state and those who adhere to the libertarian, democratic ethos of the Internet. But although anti-authoritarianism is now part of the popular culture or ideology often associated with the Internet and its early developers and users, this aspect--given the role of large institutions in guiding and funding much of the fundamental Internet development work--should not be overstated.
Nevertheless, anti-authoritarianism was one major reason why the evolution of the Internet led to significant technical departures--with respect to network architecture, cost structures, the services it carried, and the innovative "business models" that have co-evolved with it--from previous telecommunications systems.3 As with other technological developments, social and organizational goals affected the design and evolution of the Internet, first becoming "embedded" in specific network implementations and later manifesting themselves in protocols, technical standards, and operating procedures. This ensemble of characteristics, in turn, shaped the social conventions and behavioral norms that developed among users of the technology. Much may be learned from this history, which is presented in more detail in the next section.
A third major issue that needs to be addressed is the extent to which the ubiquity of the Internet's technology-based infrastructure will promote convergence in the values of the disparate user communities around the world. This is a complex matter, raising questions about whether the changing purposes for which the network of networks is being used will drive alterations in its architecture and technical features, as well as questions about the extent to which those changes can accommodate local pressures to affect network configuration or to control content through local regulatory interventions. This issue is discussed below and in Chapter 3. Without drawing premature conclusions at this point, it appears unlikely that a single, globally uniform value system will emerge. Still, one should be cautious about attempting to "predict the unpredictable,"4 given the uncertainties that surround, and are in turn created by, the continuing rapid pace of advance in digital network technology.
Globally pervasive telecommunications networks, even those in the prosaic form of the public telephone and telex, are a comparatively new phenomenon. In the last decades of the 19th century telegraphy was able to acquire something approaching global reach by means of submarine cables, but the number of nodes of the "Victorian Internet" remained limited to the industrially advanced countries and their colonial possessions.5 Even in those regions, telegraphic access tended to be restricted to the major commercial centers that lay along the seaboards or that were linked by railways. Indeed, it was not until the 1960s and 1970s that mature telecommunications systems, which had evolved as mono-functional (single-service) networks for the transmission of voice and text, achieved high penetration rates throughout industrialized countries and something approaching truly global coverage--albeit with an emphasis on major population centers.
Thus global "coverage" is not the same as a ubiquitous presence or even universal access to basic telecommunications. The great majority of the world's people still live under conditions that do not afford them basic local telephonic services, let alone global connectivity (which still relies to a considerable degree on the telephone network).
Progress, of course, has involved changes in the mix of technologies. While the telephone network continues to grow, the number of telex users has fallen as that technology has been replaced by facsimile (fax), which grew explosively during the 1980s. Around the world, all these networks have largely been operated and owned by entities--public administrations (postal, telegraph, and telephone authorities, or PTTs) or regulated private monopolies--that provided the services under common-carrier and universal-service regimes.
The rise of the Internet has been associated with privatization of telecommunications and relaxation of PTT control in many countries (e.g., with the introduction of competitive service from nontraditional players such as Internet service providers, or ISPs). This pattern reflects movement away from the inherently hierarchical architecture and control model of telephony and telex (both within countries and historically in the international, interconnected network environment); the rise of direct country-to-country dialing and other modern features associated with the new global network environment; and declining relative telecommunications prices, itself a policy goal (outside the United States, prices still tend to be higher and more likely to vary with time on the line and distance than to be flat-rate).
The design of the telephone system and the prevailing business models, as well as cultural and economic factors, powerfully shape the way we use the telephone, which is nowhere more obvious than in comparing user behavior across nations. For example, flat-rate local charges for business and residential customers may encourage frequent and long telephone calls (though charges do not fully explain the extent to which, in Western societies, the teenage children of middle-class householders engage in interminable after-school telephone conversations). Other evident changes beginning in the late 1990s relate to the spread of mobile (cellular) phones, and observers speculate about the cultural impact of anywhere/anytime calling behavior.
But although private communications practices, and the organization of commerce and industry, have been transformed in many respects by the diffusion of the telephone, we have no evidence that local and national cultural values have been significantly altered by the telephone's advent. Still less is there evidence to support the claim that the spread of ubiquitous telephone access, in and of itself, has been a strong force promoting convergence of social or business norms toward uniform regional and national, let alone global, value systems.6
Networks for data communication have a more uneven history than those for voice, although they have largely been built on the same underlying infrastructure (e.g., through the use of lines provisioned by telephone companies). Early data communications networks emerged in the 1960s and 1970s along with time-sharing computer systems. The major computer vendors developed software that supported interconnection of their machines, and consequently the early data communications networks were proprietary rather than public. Most of them were corporate networks linking different sites of a firm via leased telephone lines. Pioneer users were large corporations in the electronics and automobile industries and firms in the financial services sector; the multinationals took the lead in creating private global networks, primarily to facilitate intra-organizational data exchange.
Beginning in the 1970s, this same approach was expanded by intermediaries, third-party providers of so-called value-added networks (VANs, such as Telenet and Tymnet in the United States) to serve companies that could not afford the cost or inconvenience of developing their own networks. VANs grew in the 1980s by expanding points of presence in countries around the world, providing them with dialup and leased-line connections. Also in the 1980s, some of these companies began to offer service (e.g., GE Information Services' GE*nie) to the general public at comparatively low rates for non-business-hour use. They competed with other businesses having a "bulletin board" style as well as time-sharing roots, such as CompuServe (which introduced consumer service in 1979).
This was also the period during which VANs experimented with third-party interconnection of different businesses through the structured, controlled technologies associated with electronic data interchange (EDI). These EDI services supported the exchange of documents in standard formats through central host-computer systems; all parties to a transaction (e.g., buyer and seller) needed to be subscribers to the VAN's EDI service. VANs were treated as enhanced services in the U.S. regulatory context, making them exempt from telecommunications regulation.7
State-owned and regulated private monopoly telephone-network operators around the world moved slowly to enter the growing markets for data services. A combination of regulatory restrictions and technical and managerial incompetence (reflecting, in part, an emphasis on voice and a lack of experience in data communications) appears to have constrained these organizations. Initially some of the PTTs in Europe, with the German Bundespost at the forefront, developed circuit-switched data networks that mimicked the model of the telex network, thereby displaying the extent to which they remained "locked into" the traditional commercial vision associated with the architecture of the basic telephone system.
Only in the early 1980s did the already-existing telephone organizations move to packet-switching technologies. PTTs tended to select the non-proprietary protocol standard, X.25, which was associated with the International Organization for Standardization's (ISO) Open Systems Interconnection (OSI) reference model; X.25 was approved in 1984 by the International Telecommunication Union (ITU), through its telecommunications standards-setting arm (then the CCITT, now known as the ITU-T).8 As noted above, most of the private corporate networks at this time relied on proprietary protocols, such as the IBM Corporation's Systems Network Architecture (SNA) or the Digital Equipment Corporation's DECNET standards. The packet-switching mode in the proprietary as well as in the X.25 networks relied on a hierarchical network architecture, which conformed to traditional management notions of the manner in which information should optimally flow and be controlled within the large corporate organization.
This, as shall be seen, was very different from the architecture of the Internet. Outside the United States, this public-utility approach to packet switching was associated with country-specific public data networks (PDNs), some of which involved country-to-country gateways (based on the sister protocol standard, X.75, also used for connections to VANs). Similar to telephony charging, PDN and VAN pricing tended to involve connection-time and traffic (e.g., "kilo-packet") charges.
Early commercial data-communications networks and services--corporate networks, VANs, and PDNs--provided crucial experience that shaped the development of the global telecommunications system and created readiness for the Internet takeoff in the mid-1990s.9 The context was confused, however: it was an intersection, and occasionally a collision, between the business and engineering orientations traditional in the world of the PTTs and those of the world of computer vendors and data-processing and data-communications services. Stalemates and acrimony sometimes marked proceedings of technical committees in national and international telecommunications standards organizations during the 1980s.10
This discord gave way, by the late 1990s, to broad agreement on many basic principles concerning global networks. These principles include global interoperability (compatibility) and even openness of networks. But the process, unfolding largely in the 1980s but extending into the 1990s, involved complex tussles among ISO, the ITU and its principal constituents, the PTTs, U.S. telephone companies, corporate representatives (notably IBM, with a major stake in SNA), and the heterogeneous supporters of the Internet protocol suite known as TCP/IP. Complicating the picture was telephone-company development of standards for enhancing telephony networks, such as the integrated services digital network (ISDN) and asynchronous transfer mode (ATM) technology. But to simplify discussion, a major focus of international negotiation was the relative merits of TCP/IP vs. OSI--a contest that TCP/IP ultimately won.11
The OSI Reference Model describes a seven-layer architecture defining functions, services, and interfaces for data-communications systems. There is a related family of OSI protocols, such as X.25 and X.400 for messaging and X.500 for directories, that implement what is described in the reference model and that have been developed through conventional standards-setting processes.12 Hence, OSI has been described as a "meta-standard" rather than a conventional set of interoperability standards.13 A critical dimension of OSI is the European prominence in its development, measured by the locus of key engineering activities and ISO's location and environment. In particular, European computer vendors, European governments, and the Commission of the European Union saw the OSI program as an instrument of industrial policy to protect European manufacturers--which already were major vendors of proprietary network solutions--from the predominance of IBM and other U.S. firms. The effort to provide an alternative, or even the prospect of one, was intended by some to arrest the widespread deployment of IBM's SNA network standard in Europe.14
The interplay between OSI and TCP/IP was not straightforward;15 recollections of those involved reveal a fair amount of acrimony there as well. Yet experience and familiarity with TCP/IP, the development of which had been documented publicly since 1969 through Requests for Comments (RFCs), clearly contributed to OSI.16 The articulation of seven layers, for example, is an elaboration of the traditional four layers ascribed to basic Internet technology. That description, plus specific references to layers 1 through 7, is widely used in discussing internetworking today, though otherwise the terminology tends to be different.
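The correspondence between the two layer schemes can be sketched as a small lookup table. The grouping below is the conventional textbook mapping of the seven OSI layers onto the four layers usually ascribed to TCP/IP, offered here for illustration rather than drawn from either standard's normative text:

```python
# Conventional mapping of the seven OSI layers onto the four layers
# usually ascribed to the TCP/IP architecture. The grouping is a
# common textbook approximation, not part of either specification.
OSI_TO_TCPIP = {
    1: ("Physical",     "Link"),
    2: ("Data Link",    "Link"),
    3: ("Network",      "Internet"),
    4: ("Transport",    "Transport"),
    5: ("Session",      "Application"),
    6: ("Presentation", "Application"),
    7: ("Application",  "Application"),
}

def tcpip_layer(osi_layer: int) -> str:
    """Return the TCP/IP layer conventionally matched to an OSI layer."""
    return OSI_TO_TCPIP[osi_layer][1]

if __name__ == "__main__":
    for n, (osi_name, tcpip_name) in sorted(OSI_TO_TCPIP.items()):
        print(f"Layer {n}: {osi_name:<12} -> {tcpip_name}")
```

The many-to-one entries at the top and bottom of the table reflect the point in the text: the seven layers elaborate, rather than contradict, the Internet's coarser four-layer description.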
In the end, however, some of the concrete engineering design work of OSI standards committees was acknowledged or absorbed by the Internet Engineering Task Force (IETF). In 1989, for example, the IETF's Open PDN Routing Working Group addressed internetworking involving X.25-based PDNs using X.121 addressing, and the Network Working Group even proposed experimentation with OSI network-layer protocols over the Internet and the creation of an experimental OSI Internet (RFC 1070).
As these examples illustrate, there were individuals who were "bilingual" in these standards environments, people who attempted to work on some kind of coordination, if not integration, of approach. But they were not in the mainstream of Internet technology development.17 Although TCP/IP was adopted as a U.S. military standard (around 1980), the contention with OSI in the United States came to a head in the late 1980s when the National Institute of Standards and Technology promulgated a Federal Information Processing Standard that related OSI to U.S. government needs--the climax of the Government Open Systems Interconnection Protocol (GOSIP) initiative. GOSIP crystallized evolving concerns with OSI in the technical and business communities, and its demise in 1994 (through a finessing that offered a choice between it and TCP/IP in government procurement) marked the end of serious U.S. consideration of OSI.18
By the mid-1990s, the market preference for TCP/IP was clearly established, in part because of its comparative simplicity, which facilitated development of commercially viable products across a range of computing platforms.19 By the late 1990s, the penetration of TCP/IP technology into private (e.g., corporate) networks was reflected in the use of the term "intranets," an obvious play on "Internet."
Tensions and misunderstandings between OSI and TCP/IP proponents had as much to do with attitudes toward standards setting as with technology or even international competition. People in the Internet community have historically looked askance at telephony standards-setting, associating it with the slow progress that helped to occasion their own work and with highly bureaucratic and time-consuming procedures. By contrast, the Internet standards-setting process focused on working implementations. The philosophy was articulated by MIT's David Clark (the original Internet architect and architecture board leader), who had been involved in protocol development since the 1970s: "We reject kings, presidents, and voting. We believe in rough consensus and running code." This widely repeated characterization, voiced at a 1992 meeting, has been echoed as a motto by many and codified in official documentation of the Internet's architectural principles--themselves embodied in the TCP/IP protocol suite--and of the IETF's approach to their implementation.20
The IETF's roots date back to the ARPANET. Its RFCs were initiated to facilitate quick dissemination and discussion of the ideas and technical specifications that had been suggested by members of what was then a small but geographically dispersed networking community, funded by the Advanced Research Projects Agency (ARPA).21 If a suggested protocol seemed interesting, it was likely that someone would implement and test it. Implementations that proved useful were copied to similar systems on the Net.
In this way, the number of technical specifications and the number of people involved in "standardization" grew. Everyone who was interested and had access to the ARPANET could participate, and the results were available free of charge. With more and more people getting involved in Internet standardization, however, the IETF procedures and the standards-approval procedure became somewhat formalized. Only since 1992 has the term "standard" been officially used for technical specifications that have completed the full process of standardization (RFC 1311).
The IETF is now split into more than 100 working groups covering eight to ten functional areas. Working groups can be easily created, and most of them are dissolved after they have finished their task. In contrast with most standardization organizations, participation in the IETF and its working groups is open to anyone. A formal membership is not required. Broad and unrestricted discussion of the proposals via electronic discussion groups and mailing lists is possible. Before Internet standards are approved, at least two independent implementations must have been completed. They must work and must be interoperable.
The success of TCP/IP does not imply that the way it developed could be emulated. By the mid-1990s, the IETF was under strain, reflecting growth in the number of participants and a diversification of interests in developing and implementing the technology. New users, service providers, and network operators make it much more difficult to use the same informal consensus mechanism as before to coordinate further technical changes in the system. Development of the protocols for the Web, for example, has proceeded under the auspices of the World Wide Web Consortium, which coordinates with the IETF but is a membership organization. And a variety of industry-based consortia have emerged to address specific kinds of technology and expedite the standards-development process.
The difficulties encountered in attempting to move to a new generation of the TCP/IP protocol stack, which would enlarge the address space significantly and add other features, illustrate the problem.22 Although the new protocol (IP Version 6) was adopted by the IETF, and although backwards compatibility with IP Version 4, now widely in use, is guaranteed, not many are ready to migrate from a good to a better technical standard; they hesitate to incur switching costs because no authority can guarantee that everyone else will also switch. In the old NSFNET days, the decision to switch to a new protocol would have been comparatively easy because the National Science Foundation (NSF) could stipulate it as a condition for those who wanted connection to this attractive network. As another committee of the Computer Science and Telecommunications Board observed, "[f]or the Internet, . . . the explicit government directive to set standards [in the early ARPANET period, the beginning of the Internet] has been replaced by a process driven by vendor and market pressures, with essentially no top-down control. . . . Currently, the Internet community seems to make short-range decisions with some success, but long-range decisions, which reflect not only immediate commercial interests but also broader societal goals, may not get an effective hearing."23
Virtually the only public global data network open to corporate and personal use is the Internet. The fact that the Internet comprises thousands of technically distinct networks is a direct result of the design of the TCP/IP protocol suite, which allows Internet services to be run on top of networks based on other protocols, such as X.25, SNA, and Ethernet. The Internet's architecture and standards separate applications (from the Web to Internet telephony) from the underlying infrastructure, whereas conventional telephony grew as an application that was tightly coupled to its infrastructure. In telephony, the "intelligence" that made applications possible was based in equipment inside the network; in the Internet, that intelligence is largely in the software running on equipment attached by users at the "ends" of the network.24
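A minimal sketch of this end-to-end division of labor: below, a local socket pair stands in for the network, and an invented uppercase-echo "protocol" stands in for an application. All the intelligence lives in code at the two endpoints, while the pipe between them merely moves bytes:

```python
import socket

# A local socket pair stands in for the network: it moves bytes and
# nothing more. All "intelligence" -- here, a toy newline-delimited
# request/response protocol -- lives in the endpoint code, mirroring
# the Internet's placement of application logic at the "ends."
client_end, server_end = socket.socketpair()

def serve_one(sock: socket.socket) -> None:
    """Endpoint logic: read one newline-terminated request, reply uppercased."""
    request = b""
    while not request.endswith(b"\n"):
        request += sock.recv(1024)
    sock.sendall(request.upper())

client_end.sendall(b"hello, end-to-end\n")
serve_one(server_end)
reply = client_end.recv(1024)
print(reply)  # b'HELLO, END-TO-END\n'
```

Swapping the socket pair for a real TCP connection would change nothing in the endpoint logic, which is precisely the architectural point: the network need not know what the application is doing.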
That the Internet standards were developed in an open process facilitated their diffusion; broad participation was possible and use of the standards was unencumbered. Meanwhile, implementations by hardware and software vendors could be proprietary, contributing to the profitability of many businesses built on this technology.25 There has also been openness of a sort in the business of Internet service provision: No single network operator or service provider owns or controls "the Net," and this network of networks essentially constitutes an "unmanaged" system.
That characterization is a mixed blessing. Because the Internet is a union of different networks, the experience of a user communicating across multiple networks may devolve to a lowest common denominator--one slow-speed or low-quality segment can degrade the whole experience. This problem is of particular concern for users (a minority today) with applications that demand high speed or minimal delay. Thus, the Internet's technology and architecture do not make it a uniform experience--one reason why large providers such as America Online or UUNET have been trying to grow larger and provide complete end-to-end communications, much as the VANs did.26
As competition for customers has grown among ISPs, and concern about service quality along with it, connections among networks have become problematic; ISPs tend to discriminate among networks in making judgments about whether and on what terms to effect interconnections.27 Although the technology makes interconnection--internetworking--easy in principle, business decisions have made it complicated in practice since the mid-1990s. At the same time, the growing use of the Internet increases its value as infrastructure for a growing body of users and uses. This, in turn, will increase pressures for some kind of management and/or coordination system, as well as for mechanisms to support enhancements to quality of service (which might minimize delay for critical applications, for example).28
In its early commercial period, the Internet architecture has promoted a horizontal pattern of organizations, in contrast to the more vertical, hierarchical, and controlled world of telephony. The ease of user attachment to the Internet makes it comparatively easy now to set up links to a variety of sources of information and entertainment--and to exchange information and communicate with other users--all without being tied to a single service provider. Even the smallest enterprise thus has the potential to achieve a global market presence. But this potential can be misleading, as new issues are arising now that a growing number of enterprises and individuals have figured this out. How, for example, can all these players compete for attention, or even be found, in an increasingly crowded Internet marketplace? And how do small enterprises thrive in the face of an economics that continues, as time progresses, to promote consolidation in markets for both suppliers and users of Internet technology? Chapter 7 on commerce has a related discussion.
Different social values and goals have influenced the evolution of data networks. Systems engineers, military leaders, business executives, policymakers, and private users--whether consciously or otherwise--shape the technical characteristics of these communications systems through technical proposals and practical decisions that reflect their individual needs, preferences, and world views. Once these values become "embedded" in the implementation of a particular network, they become latent, not only in the technical standards themselves but in the operating procedures, social conventions, and behavioral norms that develop among the technology's users.
These phenomena are not unique to the Internet; they are evident across a wide range of technologies, such as various forms of manufacturing automation, and reflect the very human processes of technology design and use. They arise because standards affect the architecture of information. As Libicki notes, standards promote different patterns regarding who is connected to whom and what is expressed easily or not; social relations vary, depending on whether a communications protocol is top-down or bottom-up; and the choice of programming language affects the relationship of programmers to their managers.29
Like protocol standards, many nontechnical norms facilitate compatibility and interoperability among users. Because they, too, have "positive network externalities" that promote their de facto acceptance, this larger structure of technological and social practices acquires considerable inertia and hence becomes difficult to change.30 Moreover, the stronger the complementarities among the component elements of the resulting structure, the more likely it is to undergo gradual adaptations through incremental modification rather than radical change.
This is the sense in which one may speak of particular features of the systems' software components, or of certain conventions among network users, as having become "locked in." But it is important to emphasize that the extent of "lock-in" depends on the degree of complementarity among components. TCP/IP illustrates the point. Because the Internet, compared to other communications networks, can tolerate great heterogeneity among the components that may be interconnected, it permits a greater diversity of practices and associated values among its users, facilitating the inclusion of a broad range of systems in the overall network. This inclusiveness can inhibit radical modification of the system's underlying technologies. This is another face of the Internet as infrastructure: broad dependence can slow evolution.
Interconnection--inter-networking--is thus the most obvious value in the Internet, though just one stage in a complex evolution. It embodies one possible technical solution--albeit a solution that was reinforced by its comparative generality and flexibility--to interconnecting host computers of different kinds of networks based on different technologies. The Internet's principal forerunner, the ARPANET, developed while alternative solutions were being worked on or were already available. They included efforts by computer vendors, such as Xerox's PARC XNS protocol, and the Unix-to-Unix copy protocol (UUCP), originally developed at AT&T's Bell Laboratories and licensed at very low cost. These efforts reflected a perceived need among researchers to interconnect the technically diverse networks that had already come into existence, at least in the United States.31
An intentional expansion of networking into the research community--a broadening of interconnection--was enabled by NSF's mid-1980s launch of NSFNET, a network that connected six research supercomputer centers and became a backbone (along with networks established by the Department of Energy and the National Aeronautics and Space Administration) for the larger network of networks.32 A crucial early decision of NSF, after intense internal negotiations, was to base NSFNET on TCP/IP.33 Another important decision led NSF to foster development of regional networks, which aggregated traffic from and provided technical support to smaller networks such as campuses.34
These regional networks evolved in the second half of the 1980s, and many of them were cosponsored by business organizations that, within certain limits, were allowed to use the networks for commercial purposes. Thus, a new type of hybrid network appeared on the landscape of data networks, and with a very different set of users. Some of the regional networks spun off commercial networks--for example, NYSERNET in the New York area produced PSI--but most of the original regional networks faded away after the removal of NSF support (prompted by the commercialization of the backbone), the decommissioning of NSFNET in 1995, and the rise of commercial ISPs. During their heyday, however, these regional networks and their local tributaries--based on local area network (LAN) technology, X.25, SNA, DECNET, and other systems--were heterogeneous both socially and technically.
Interconnection was only one dimension; just as important was the motivation behind it. This included not only the need for communication per se but the sharing of information and computational resources. These factors were illustrated--and explored--in the expansion of LANs. As the number of workstations and personal computers in businesses and universities began to grow, vendors began to develop network technology to interconnect the computers and make it possible for users to share information and move files.35
Many organizations adopted Ethernet, token ring, or token bus technology and built their own (isolated) networks, which were used for internal purposes and initially not designed for connection to external networks. But in the second half of the 1980s, more and more LANs, including many campus networks, were linked to the Internet, while corporate LANs were linked to private wide-area networks. These developments, and their influence on how and where people did their work, stimulated business executives to rethink strategies of vertical integration. They began to consider technology-based alternatives--such as decentralization, outsourcing via inter-organizational networks, and the creation of the networked firm--to the traditional model of the corporation.
As networks evolved outside the business domain, the interplay of values and technology was even more apparent. Most of the noncommercial networks interconnected universities, governmental and non-governmental organizations, and eventually private households, self-help groups, and the like. These noncommercial networks grew up among user communities with similar interests--initially, people who were active users of particular computer systems or software and who wanted to communicate with kindred spirits. Such systems could be called "cooperative networks," because users or user organizations were involved in setting up the network and coordinating its functions, even though a traditional network provider sometimes operated it.
Notable cooperative networks that evolved in the research and education communities were the Computer Science Network (CSNET) and BITNET. E-mail communication, which unexpectedly had proved to be the most popular application of ARPANET, was the dominant service in CSNET and BITNET as well. Supported with limited funds from the National Science Foundation, CSNET connected computer science departments that had no access to the ARPANET and therefore lacked sophisticated facilities to communicate, collaborate, and share ideas.
Early on, computer scientists formed a kind of community, initially built around the time-shared computer and later around programming languages, operating systems, and computer networks.36 Their sense of being pioneers in a revolutionary change of information-processing shaped the spirit of collaboration, informality, and even social responsibility behind the values and rules that guided these researchers' use of networks in the late 1970s and early 1980s.37 However, by the late 1980s, the increase in NSF support for networking by other kinds of scientists led to concern among computer scientists that their own support could be eroded. Here, sharing collided with competition for resources.38
BITNET was an extragovernmental effort. Based on IBM technology, this completely decentralized network was set up by universities and research centers to facilitate information exchange between faculty, students, and administrative staff. BITNET extended the computer-communications infrastructure beyond CSNET, both in terms of the number and kind of people connected--that is, it involved a much wider range of researchers than computer scientists alone. BITNET was associated with EDUCOM, a nonprofit consortium of higher-education institutions that facilitated access to information resources in teaching, learning, scholarship, and research.
All the early academic and research networks fostered e-mail discussion vehicles. Most remarkable was the UUCP-based USENET, a system of newsgroups (bulletin boards) that was originally designed as a forum in which UNIX users could discuss their problems and assist each other. Very soon, USENET grew into a platform for a broad variety of newsgroups, including anti-authoritarian student groups and hacker communities.39 USENET relied on self-organization and also on self-restraint. Many of its rules and norms gave rise to an informal code of conduct for Internet users--such as "never disturb the flow of information" and "every user has the right to say anything and to ignore anything"--that is sometimes referred to as Netiquette. This code was viewed by those who adopted it as a natural extension of the fundamental values of American society, such as freedom of speech.
As the complex of research and education networks grew, federal program managers became concerned about the ways in which government-funded infrastructure would be used. The result was an effort, at least by the program managers, to limit usage of the early Internet components, and notably of NSFNET, by means of an "acceptable use policy" (AUP). In practice, enforcing an AUP was difficult; it depended on an honor system. And it effected a distinction between those with legitimate access and those--typically, parties seeking commercial gain--without it.
Practically, though, the more people experienced the communications capability and information access afforded by the Internet, the more they wanted to use it; differentiating sanctioned research and education activities from other uses seemed increasingly arbitrary and artificial. Avoiding the effect of the AUP, in fact, was one reason for the commercialization of the Internet backbone and the decommissioning of NSFNET in 1995. To enable that transition, NSF provided seed funding for public--as opposed to intragovernmental--network-traffic exchange points (network access points), at which multiple providers of private backbones could interconnect. This step promoted interconnection and the prospect of multiple ISPs; commercial ISPs, meanwhile, had banded together to underwrite their own exchange facilities, the Commercial Internet Exchange (CIX).40
In contrast to the multifaceted growth of computer networks in the United States, progress was slower overseas. Although CSNET and BITNET, as well as the ARPANET,41 had links to Europe, the long-lasting European aversion to TCP/IP appears to have been a crucial reason why research and education networks developed slowly there and never transformed themselves into commercially viable networks. It took TCP/IP and its supporters another half decade to achieve acceptance in continental Europe.42 The German example may be the best illustration of policy failure in this respect; industrial policy goals, a commitment to "open standards," and the conviction that Germany could catch up with the technology leader through a solo effort more or less ensured this failure.
IBM's initiative to launch and sponsor the European Academic and Research Network (EARN, an extension of BITNET) in the early 1980s was welcomed by the research community. Within six months after EARN started operation, 75 mainframes had been interconnected in Germany. By the mid-1980s, more than 500 computers in the research organizations of 19 countries were linked to EARN. This system required permission from the European telecommunications monopolies, which up to that time had never agreed to allow data to be transmitted across national borders through private lines. The EARN board of directors had to struggle hard to get that permission. There were also problems with governments, which were concerned that EARN might be part of IBM's strategy to expand market dominance--not least because initially only IBM mainframes and Digital Equipment's VAXes could be linked via EARN. Although some governments had no objection to the project, others, among them the Germans, were only willing to give the green light on the condition that EARN evolve into an OSI-based system. This was agreed to, but it never actually materialized.
The German government reacted to EARN by initiating the Wissenschaftsnetz (Science Network). This was consistent with government programs to support the German computer industry and its most prominent corporation, Siemens. Technical standards played a significant role in this strategy. In a concerted action, most European governments had declared support for open, nonproprietary standards based on the OSI frame of reference. Germany thus insisted that the Wissenschaftsnetz be based on OSI standards as well. It was managed by the Verein Deutsches Forschungsnetz (German Research Network Association), whose members were drawn mainly from universities and large nonacademic research institutions.
In contrast with the United States or the United Kingdom, where dedicated organizations were charged with operating the network, in Germany the PTT--the Bundespost (Telekom)--was regarded as the "natural" candidate to provide this service. The Bundespost held the monopoly right for networks but had very little experience with packet-switched data communications.43 Like practically all of its European partners, the Bundespost was committed to OSI. Thus the Wissenschaftsnetz was embedded in an institutional structure controlled by people who were not only closed to alternatives to OSI but who rigorously rejected applications from computer scientists and software engineers for support of TCP/IP-related R&D.
Although--or because--OSI was shielded from competition, it took years for products based on OSI standards to become available that had any appeal to users. X.400-based e-mail software, for instance, did not appear on the market until TCP/IP-based products were well established.44 That lack of competition, together with its reliance on an official and rather slow OSI standardization process and its failure to involve users in the development of products and services, greatly limited the German research and education network's evolutionary dynamics. Thus the Wissenschaftsnetz did not attract many users.
Today, Internet service providers, telephone companies, cable-television companies, wireless device and service companies, computer hardware and software vendors, media companies, and all kinds of firms engaged in e-commerce use the Internet as an infrastructure and a business channel. This has again changed the character of the Internet, but it has not done away with its fundamental characteristics: decentralization, user involvement, openness, and self-organization (by which is meant a network infrastructure designed to allow groups to organize themselves to use it). It would be misleading, however, to infer that today's "community" of Internet users is homogeneous, cohesive, or collegial, or that its values, objectives, and skills resemble those of the early Internet pioneers. But this only means that, from a culture and values perspective, the Internet has become even more heterogeneous than it was a decade ago.
What has been emphasized in the preceding sections of this chapter is how the characteristics of the Internet derive from its development path, how users have influenced that path of development, and how the resulting network has influenced and will continue to influence its users. An important further question to consider is how a system with these technical and social characteristics can be "coordinated." Depending on who is addressing the issue, that word may mean managed, governed, or regulated. But each of those terms is value-laden and can generate significant resistance, especially among those most closely associated with the Internet's origins.45
It remains to be seen whether the anti-hierarchical culture of information freedom that is so much a part of the Internet's history will survive and continue to animate resistance to local regulation. This history feeds the perception, in some quarters, of the Internet as socio-technologically so distinct from other communication systems that it should be treated as a special case--that it should not be brought into the established framework by which sovereign states and local political entities have long sought to control or regulate access to information.
A source of countervailing economic and political pressure--indeed, a countervailing "culture"--has been created by the policy decision to promote this technology as the communications infrastructure for interactive electronic commerce. In many countries, the Internet is seen as a positive force for economic development, generating acceptance despite concerns about some of the content that may be communicated or uses that may be supported. As a consequence, the efforts of technologists, entrepreneurs, and international-business-law specialists to facilitate greater use of the Internet for conducting public business are being widely hailed today as socially beneficial innovations.
Among the likely consequences of these developments is reinforcement of the dual trend toward, on the one hand, adapting the Internet for more secure and private point-to-point communications, and, on the other, using it as a medium for mass broadcasting of video, music, and text. The alignment of business interests with the first of these would seem to favor technologies that weaken the ability of government to monitor and control content in interactive communications, an area that sovereign states have traditionally been less disposed to regulate.46 Still, business organizations long ago learned to accommodate themselves to the greater sensitivities of governments regarding unregulated broadcasts that might carry unwelcome news or disruptive messages.
Meanwhile, at the international level, the ITU (like other nongovernmental organizations) has been assessing how it could respond to the upheaval in telecommunications and related activities, from standards setting to broader policymaking, associated with the Internet. OSI's commercial failure and TCP/IP's commercial success sent clear signals about process problems. An indication of the potential for change, or at least a willingness to consider different approaches, is a 1998 ITU document that observes: "Competition in telecommunications is rapidly becoming a true market force whose evolution cannot be planned by policymakers, a force which increasingly is seen as best regulated on the basis of principles that are not specific to telecommunications but derived from a broader economic, social and cultural perspective."47
Some of the concerns posed by the Internet relate to its technologies for distributing information, which affect private parties with property interests in certain content as well as governments interested not only in protecting those private-property rights but also in meeting their own mission needs (which may be expressed in differing degrees of support for distribution of different kinds of content to different segments of the population). The Internet presents "the culture of sovereign control" with technical challenges that are not present when information comes embodied in conventional, tangible media such as newspapers, books, films, phonograph records, and audiocassette tapes or when signals are transmitted through physical channels, as is the case with telegraph, telex, and telephone messages. Even with regard to the broadcast media, domestic regulations and international conventions that restrict broadcasters to particular frequencies also serve to make it more feasible to identify, and interfere with, transmission and reception of particular sources of radio and TV messages.
Nevertheless, increasing commercial use of the Internet is driving both technical changes and consideration of nontechnical interventions. As Lessig has put it, the "changes that make commerce possible will also be changes that will make regulation easy."48 For example, company interests in understanding customer behavior have driven the design of mechanisms for collecting personal information; this has led to increased privacy concerns, and to experimentation with technologies to permit anonymous interactions. Large organizations that use the Internet have developed firewalls, which can limit traffic coming into and going out of the organization's network, as well as software--e.g., e-mail filters--to monitor the kinds of communications that network users send. Meanwhile, governments, concerned about use of the Internet for criminal purposes, are contemplating the feasibility of eavesdropping or otherwise intervening in communications. All of these developments raise questions about whether the Internet's fundamental architectural principles can be preserved.49
Do these developments pose a threat to local values? The analysis in this chapter suggests that the increasing dominance of a commercial culture on the Internet is likely to produce a situation in which local jurisdictions will have considerable autonomy, as well as greater technical capabilities, to restrict local consumption of Internet content, at least as long as they do not use that power in blatant efforts to protect local commercial enterprises from the competition of politically powerful international media organizations. The implication is that variations in local cultural norms are unlikely to be swept aside by the future spread and penetration of Internet-based services.
1 Of course, given the limitations on resources embodied in the first connected computing systems, an early feature was simply logging in, which could be enhanced to support different levels of security in access control. Attempts to introduce security for specific contexts or user groups date from the 1970s. See Stephen S. Kent, 1999, "Security and the Internet (circa 1980-1990)," ACM SIGCOMM Tutorial: A Technical History of the Internet, available online at <http://www.cs.utexas.edu/users/dragon/sigcomm/tl/kent.slides.ppt>.
2 Ithiel de Sola Pool. 1983. Technologies of Freedom. Cambridge MA: Belknap Press.
3 These fundamental attributes of the Internet have been chronicled in a series of Computer Science and Telecommunications Board reports: Realizing the Information Future (1994), The Unpredictable Certainty (1996), and The Internet's Coming of Age (2001), all National Research Council reports published by the National Academy Press, Washington, D.C.
4 David J. Farber. 2000. "Predicting the Unpredictable--Technology and Society," in Understanding the Impact of Global Networks on Local Social, Political and Cultural Values, Christoph Engel and Kenneth H. Keller, eds., 29-37. Baden-Baden: Nomos.
5 Tom Standage. 1998. The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's On-line Pioneers. New York: Berkeley Books.
6 Ithiel de Sola Pool, ed., 1978, The Social Impact of the Telephone, Cambridge MA: The MIT Press; Christian Pinaud, 1985, Entre Nous, les Telephones. Vers une Sociologie de la Telecommunication, Paris: Insep Editions; Claude S. Fischer, 1992, America Calling: A Social History of the Telephone to 1940, Berkeley, CA: University of California Press.
7 Provision of international service involved arrangements with international record carriers for international transit service and arrangements with PTTs for local points of presence and/or gateways to local data networks.
8 Marvin A. Sirbu and Laurence E. Zwimpfer. 1985. "Standards Setting for Computer Communication: The Case of X.25. A Detailed Examination of the Development of X.25," IEEE Communications Magazine 23:35-45.
9 Marjory S. Blumenthal. 2000. "Architecture and Expectations: Networks of the World-Unite!," The Promise of Global Networks, Jorge Reina Schement, ed., 1-52. Washington, D.C.: Aspen Institute. Available online at <http://www.aspeninst.org/publicationsl/bookstorecommunications-promise.asp>.
10 U.S. Congress, Office of Technology Assessment. 1992. Global Standards: Building Blocks for the Future, TCT-512. Washington, D.C.: U.S. Government Printing Office, March, pp. 12-14.
11 Susanne K. Schmidt and Raymund Werle. 1998. Coordinating Technology: Studies in the International Standardization of Telecommunications. Cambridge, MA: The MIT Press.
12 Todd Shaiman. 1995. The Political Economy of Anticipatory Standards: The Case of the Open Systems Interconnection Reference Model. University of Oxford M.Sc. Thesis in Economic and Social History. September.
13 Paul A. David and Mark Shurmer, 1996, "Formal Standards-setting for Global Telecommunications and Information Services. Towards an Institutional Regime Transformation?," Telecommunications Policy 20(10):789-815; Paul A. David, 2000, "The Internet and the Economics of Network Technology Evolution," Understanding the Impact of Global Networks on Local Social, Political and Cultural Values, Christoph Engel and Kenneth H. Keller, eds., 40-71, Baden-Baden: Nomos.
14 In the same way, many in the European telecommunications industry came to regard ISDN not only as the route to providing a seamless means of data communication via telephone lines, but also as the means of stabilizing the PTT monopolies on the eve of deregulation and liberalization. The peculiar combination of abstract principles and concrete economic interests may be one reason why neither OSI-based networks nor ISDN was ever translated into an integrated system diffused widely within national networks, let alone on a global scale. See Paul A. David and W. Edward Steinmueller, 1990, "The ISDN Bandwagon Is Coming, But Who Will Be There to Climb Aboard? Quandaries in the Economics of Data Communication Networks," Economics of Innovation and New Technology 1:43-62.
15 T.M. Egyedi. 1999. "Tension Between Standardisation and Flexibility Revisited: A Critique," Standardisation and Innovation in Information Technology: Proceedings of the 1st IEEE Conference on Standardisation and Innovation in Information Technology (SIIT '99), Aachen, Germany, September 15-17, 1999, Kai Jakobs and Robin Williams, eds., 65-74. Piscataway, NJ: IEEE.
16 David M. Piscitello and A. Lyman Chapin, 1993, Open Systems Networking: TCP/IP and OSI. Addison-Wesley Professional Computing Series; T.M. Egyedi, 1999, "Tension Between Standardisation and Flexibility Revisited: A Critique," Standardisation and Innovation in Information Technology: Proceedings of the 1st IEEE Conference on Standardisation and Innovation in Information Technology (SIIT '99), Kai Jakobs and Robin Williams, eds., 65-74, Piscataway, NJ: IEEE.
17 This social attitude and discrimination were evident in the criticism lobbed occasionally at the original executive director of the Internet Society, who had a history of involvement in standards setting both at U.S. and international organizations.
18 Shirley M. Radack. 1994. "The Federal Government and Information Technology Standards: Building the National Information Infrastructure," Government Information Quarterly 11(4):373-385.
19 It has been observed that standards can benefit from a bandwagon effect, and this was the case with TCP/IP, for which software and product development grew steadily. See Martin C. Libicki. 1995. Standards: The Rough Road to the Common Byte. Washington, D.C.: Center for Advanced Concepts and Technology, National Defense University.
20 Brian Carpenter, ed. 1996. "Architectural Principles of the Internet," Internet Engineering Task Force RFC 1958. Available online at <http://info.internet.isi.edu/in-notes/rfc/files/rfc1958.txt>.
21 Janet Ellen Abbate. 1999. Inventing the Internet. Cambridge MA: The MIT Press, pp. 73-74.
22 See CSTB, The Internet's Coming of Age, 2001.
23 See CSTB, Realizing the Information Future, 1994, p. x.
24 For a fuller explanation of the Internet's generality, flexibility, and architecture, see CSTB, Realizing the Information Future (Chapter 2) and The Internet's Coming of Age.
25 Ironically, despite the openness and bottom-up character of Internet standardization, the process is not easily accessible to outsiders. Only insiders understand what is being negotiated in Requests for Comments and discussed in developers' meetings. The IETF operates as a meritocracy; its pioneers believed that there was a technical solution for every problem, and that a solution that had to be voted on could not be the optimal one.
26 See CSTB, The Internet's Coming of Age, 2001.
27 See CSTB, The Internet's Coming of Age, 2001.
28 See Blumenthal, "Architecture and Expectations: Networks of the World-Unite!," 2000.
29 See Libicki, Standards: The Rough Road to the Common Byte, 1995.
30 Karl Wärneryd. 1990. "Conventions: An Evolutionary Approach," Constitutional Political Economy 1:83-107.
31 Katie Hafner and Matthew Lyon, 1996, Where Wizards Stay Up Late: The Origins of the Internet, New York: Simon & Schuster; John S. Quarterman, 1990, The Matrix. Computer Networks and Conferencing Systems Worldwide, Bedford, MA: Digital Press; Peter H. Salus, 1995, Casting the Net. From ARPANET to INTERNET and Beyond, Reading, MA/ Menlo Park, CA: Addison-Wesley.
32 See Computer Science and Telecommunications Board, National Research Council, 1988, Toward a National Research Network. Washington, D.C.: National Academy Press.
33 Juan D. Rogers. 1998. "Internetworking and the Politics of Science: NSFNET in Internet History," The Information Society 14:213-228.
34 See CSTB, Realizing the Information Future, 1994.
35 Computer Science and Telecommunications Board, National Research Council. 1999. Funding a Revolution: Government Support for Computing Research. Washington D.C.: National Academy Press, pp. 169-183.
36 Arthur L. Norberg and Judy E. O'Neill. 1996. Transforming Computer Technology. Information Processing for the Pentagon, 1962-1986. Baltimore: Johns Hopkins University Press. See also CSTB, Funding a Revolution, 1999.
37 Volker Leib and Raymund Werle. 1998. "Computernetze als Infrastrukturen und Kommunikationsmedien der Wissenschaft," Rundfunk und Fernsehen 46(2-3):254-273.
38 See CSTB, Funding a Revolution, 1999.
39 Michael Hauben and Ronda Hauben. 1997. Netizens. On the History and Impact of Usenet and the Internet. Los Alamitos, CA: IEEE Computer Society Press.
40 See CSTB, Realizing the Information Future, 1994. Also see Juan D. Rogers, 1998, "Internetworking and the Politics of Science: NSFNET in Internet History," The Information Society 14:213-228; Carl Malamud, 1993, Exploring the Internet. A Technical Travelogue, Englewood Cliffs, NJ: Prentice-Hall.
41 The Joint Academic Network (JANET) was set up in Britain and used by universities, the Ministry of Defence, and research organizations. See Malamud, Exploring the Internet. A Technical Travelogue, 1993.
42 Malamud, Exploring the Internet. A Technical Travelogue, 1993.
43 Raymund Werle. 2000. "The Impact of Information Networks on the Structure of Political Systems," Understanding the Impact of Global Networks on Local Social, Political and Cultural Values, Christoph Engel and Kenneth H. Keller, eds., 167-192, Baden-Baden: Nomos.
44 Schmidt and Werle, Coordinating Technology, 1998, pp. 230-243.
45 See Blumenthal, "Architecture and Expectations: Networks of the World-Unite!," 2000.
46 Ithiel de Sola Pool. 1990. Technologies Without Boundaries: On Telecommunications in a Global Age. Cambridge, MA: Harvard University Press, Chapter 7.
47 International Telecommunications Union, Draft Strategic Plan for the Union 1999-2003, dated 1998. Available online at <http://www.itu.int/newsroom/press/PP98/Documents/StratPlan9903.html>.
48 Lawrence Lessig. 1999. Code and Other Laws of Cyberspace. New York: Basic Books, p. 30.
49 See David D. Clark and Marjory S. Blumenthal, 2000, "Rethinking the Design of the Internet: The End to End Arguments vs. the Brave New World," presented at TPRC, September 24; available online at <http://www.tprc.org/Agenda00.htm>.