Internet technology has developed in a rapid series of leaps to become one of the most fundamental communication infrastructures of modern society. When referring to the Internet as a communication infrastructure, it is important to remember that the Internet is in reality a technological base upon which a multitude of software applications rely to provide users with the communication tools they desire. It is equally important to realize that the Internet is not a single thing: the name collects a cluster of technological innovations, infrastructures, applications, and social organizations. The Internet is not one innovation but a rapid series of steps, innovations, and collaborations within the field of communication technology, and as a phenomenon it continues to evolve technically.
Among the reasons for the prominence of the Internet as a communication infrastructure is the multitude of design choices that have been made out of necessity and conviction by groups of developers over half a century. This article provides an overview of these developments and some of their implications.
The Internet As A Complex Innovation
First it is important to clarify the difference between the Internet and the software applications that run on top of it. This distinction matters because the two levels of technology are regularly confused with each other. As early as 1998 Tim Berners-Lee, the father of the world wide web, saw the need to make this distinction when he wrote (1998, n.p.):
The Web is an abstract (imaginary) space of information. On the Net, you find computers – on the Web, you find documents, sounds, videos . . . information. On the Net, the connections are cables between computers; on the Web, connections are hypertext links. The Web exists because of programs that communicate between computers on the Net. The Web could not be without the Net.
Thus, the Internet is made up of the hardware (e.g., the computers, routers, cables, and modems) and the protocols required to enable communication between the machines. The Internet is therefore the base infrastructure used by software applications such as email, the world wide web, messaging, and file sharing.
The Internet is more than one technological system, artefact, or innovation; it is a whole range of technical, social, and organizational innovations in the field of information technology, which have erratically and drastically evolved into our present-day information and communication infrastructure (Castells 1996; 2001). The haphazard manner in which this information infrastructure (Hanseth 1996) has evolved plays a significant role in the way it is used, controlled, and developed.
In addition, the open nature and unplanned development of the Internet (Ciborra 1992; Dahlbom & Janlert 1996; Hanseth 1996) have produced a so-called autonomous technology (Winner 1978). In other words, development through open standards leads to a technology that lacks, or seems to lack, conscious control and definition. When we write about the Internet we indulge in a simplification or a convenient fiction (Kling et al. 2005), reducing its different aspects and complexities until we achieve a manageable subject.
The fundamental idea underpinning the Internet infrastructure is the creation of a nondiscriminatory mode of transportation. This is defined as the end-to-end principle of the Internet protocol (IP). It states that, whenever possible, communication-protocol operations should be defined to occur at the end points of a communication system (Saltzer et al. 1984). The development of this idea led to the creation of a communication system that was not concerned with the content of what was being transported as long as what was transported followed the correct transportation procedure. The effect of this is that social interaction that is conducted via the Internet is in some sense “freer” than many alternatives since it is not constrained by the technology. To put it another way, the constraints and enabling factors of the technology are nondiscriminatory with respect to the designs of the user (Lessig 1999). A helpful image is that of the freight container, which can be used to transport just about anything that fits into its standardized requirements.
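The principle can be sketched in a few lines of code. The toy `network_forward` function below is a hypothetical illustration, not part of any real protocol: it stands in for the neutral transport layer, delivering opaque bytes without inspecting them and leaving all interpretation to the end points.

```python
def network_forward(packet: bytes) -> bytes:
    """A stand-in for the neutral network: deliver bytes unchanged,
    indifferent to what they encode."""
    return packet  # no inspection, no discrimination by content

# Two very different "applications" use the same transport.
text_message = "hello, world".encode("utf-8")
image_fragment = bytes([0x89, 0x50, 0x4E, 0x47])  # first bytes of a PNG file

# The network treats both payloads identically; only the end points
# know (or care) what the bytes mean.
assert network_forward(text_message).decode("utf-8") == "hello, world"
assert network_forward(image_fragment) == bytes([0x89, 0x50, 0x4E, 0x47])
```

Like the freight container, the transport accepts any payload that meets its standardized requirements, here simply a sequence of bytes.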
Since the Internet provides for open, nondiscriminatory transportation of data it has the ability to be used as a platform for a seemingly endless array of content. With the addition of speed the exchanging of content becomes more or less instantaneous and this brings with it the appearance of computer-supported simultaneous interaction.
Historical Development
Computers were connected together prior to the Internet, but most had a limited ability to communicate with outside networks. In certain situations networks could be connected through gateways or bridges, but these were often technically limited or built for a single use. A common networking method was linking terminals by leased lines to a central mainframe computer.
Researchers were aware that such technological constraints would be overcome in the future, as indicated by Licklider’s observations (1960, 7):
It seems reasonable to envision, for a time 10 or 15 years hence, a “thinking center” that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval and the symbiotic functions . . . The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users . . . In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users.
In 1962 Licklider became head of the information-processing office of the US Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA). The agency had access to three different computer networks, each with its own user commands. To communicate across networks, officials had to physically move to a different terminal and use the commands specific to that network. The need for inter-networking was evident. The solution was to create a terminal that would enable users to communicate with different networks without physically changing terminals or learning network-specific commands.
An important component in resolving the inter-networking issue was the ability to connect the separate physical networks to form a logical network. In part this problem was solved in the 1960s by a group of researchers who developed and implemented the idea of packet switching. The origins of packet switching lie in theoretical ideas of advanced network decentralization. The purpose of this was to decrease network vulnerability by ensuring that communication networks functioned even if parts of the network were damaged (Baran 2002). This is the source of the persistent idea that the Internet was developed to survive a nuclear attack.
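As an illustration (not drawn from the original packet-switching designs), the sketch below splits a message into independently routable, numbered packets and reassembles it at the destination, even when the packets arrive out of order:

```python
import random

def packetize(message: bytes, size: int = 4):
    """Split a message into (sequence number, chunk) packets of at most `size` bytes."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the payload."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"packet switching in action"
packets = packetize(message)
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == message
```

Because each packet carries its own sequence number, no single path through the network is critical; this is the property that decreases vulnerability when parts of the network are damaged.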
Arpanet
The first implementation of Licklider's idea of an inter-networking system was the ARPANET. By the end of 1969 it joined the University of California, Los Angeles, the Stanford Research Institute, the University of Utah, and the University of California, Santa Barbara, in a four-node network. The four-node network grew steadily, and by 1981 the number of hosts had reached 213, with a new host being added approximately every 20 days (Hafner 1998).
Requests For Comments
The ARPANET was to become the technical foundation of what would eventually be the Internet. Its technical development was based upon a loose consensus among developers. This consensus was, and still is, put forward and discussed in a system of documents known as RFCs (Request for Comments). The focus of the RFCs is on proposing and distributing Internet protocols and systems.
Early International Networking
Early international inter-networking was slow in developing, with Europeans focusing on developing the X.25 packet-switched networks. However, there was an international ARPANET collaboration in 1972 with the Norwegian Seismic Array (NORSAR), in 1973 with the Swedish Tanum Earth Station (via satellite links), and with University College London.
The International Telecommunication Union (ITU) further developed packet-switched network standards and incorporated them in X.25 and related standards. By 1974, X.25 formed the basis for the SERCnet network between British academic and research sites (later to become JANET). In 1976 the ITU standard on X.25, based on the concept of virtual circuits, was approved.
The British Post Office, Western Union International, and Tymnet collaborated to create the first international packet-switched network, referred to as the International Packet-Switched Service (IPSS) in 1978. This network grew beyond Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.
A major difference between X.25 and the ARPANET was that X.25 was open to commercial use. X.25 would be used in the first dial-in public-access networks (e.g., CompuServe and Tymnet). In 1979 CompuServe became the first to offer electronic mail capabilities and technical support to personal-computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. There were also the America Online (AOL) and Prodigy dial-in networks and many bulletin-board system (BBS) networks such as FidoNet.
Unifying Networks
With the growth and development of many networks, a system was needed to unify them. The problems created by the many differences between the diverse network protocols were overcome by 1973 with the development of a common inter-network protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible (Leiner et al. 2003).
The advantage of the inter-network protocol was that the role of the network was reduced, making it easier to join most networks together. TCP/IP, a development of the 1974 TCP (Transmission Control Protocol), was introduced by 1978. Three years later the associated standards were published as RFCs 791, 792, and 793, and adopted for use. On January 1, 1983, the TCP/IP protocols became the only approved protocols on the ARPANET, replacing the earlier NCP (Network Control Protocol; Postel 1981).
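The layering at the heart of TCP/IP can be illustrated with a drastically simplified sketch. The header fields below are a tiny, hypothetical subset of the real ones, chosen only for illustration: application data is wrapped in a TCP-like segment, which is in turn wrapped in an IP-like packet, and the network can forward the result without regard to the innermost payload.

```python
import struct

def tcp_wrap(src_port: int, dst_port: int, seq: int, payload: bytes) -> bytes:
    """Prefix a TCP-like header (ports and sequence number only; real
    TCP headers carry many more fields)."""
    return struct.pack("!HHI", src_port, dst_port, seq) + payload

def ip_wrap(src_addr: bytes, dst_addr: bytes, segment: bytes) -> bytes:
    """Prefix an IP-like header (4-byte source and destination addresses only)."""
    return struct.pack("!4s4s", src_addr, dst_addr) + segment

def ip_unwrap(packet: bytes) -> bytes:
    return packet[8:]   # strip the 8-byte IP-like header

def tcp_unwrap(segment: bytes) -> bytes:
    return segment[8:]  # strip the 8-byte TCP-like header

data = b"GET /"
packet = ip_wrap(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
                 tcp_wrap(1025, 80, 0, data))
assert tcp_unwrap(ip_unwrap(packet)) == data
```

Each layer reads only its own header, mirroring the division of labor described above, in which the hosts rather than the network are responsible for reliability.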
The first use of the term “Internet” was in the first TCP-related RFC (Cerf et al. 1974). The term soon came into more general use, with “an internet” meaning any network using TCP/IP. “The Internet” came to mean a global and large network using TCP/IP.
Regulating The Internet
By creating an inter-networking system to enable collaboration across networks, the technological choices were based upon a model that transports data irrespective of the content of that data. This model ensured the rapid growth of the Internet since new networks could be connected to the Internet through the TCP/IP protocol.
The technological decisions made to facilitate inter-networking, and the resulting rapid growth of the Internet, created a digital environment with large potential for open and free forms of communication. This fostered an illusion that the Internet and the software applications running on it constituted a virtual space free from conventional rules and regulations (Johnson & Post 1996).
The Internet was seen as being a technological creation that, through a deterministic evolutionary process, created an environment of liberalism. Early users experienced this freedom through the lack of regulation and primarily the lack of governmental intervention. However, this lack of governmental intervention was to be short lived, and soon regulations would threaten the perception of freedom online. The early users attempted to protest governmental involvement; the most famous of these attempts was the writing of A declaration of the independence of cyberspace (Barlow 1996).
The openness and freedom of the Internet was not deterministic, however. The lack of regulation was the result of the legal complexity created by the novelty of an open global communications network and the legal questions it raised. Reflection on its history suggests that the Internet should not be seen as a deterministically defined technology moving toward ever-greater freedom, but as an example of autonomous technology (Winner 1978). This is a situation in which a technology has been built upon several small decisions, each driven by human choice, giving the overall impression of a technologically determined development.
The Internet is not an inherently open, democratic technology. Technology itself is neutral and can therefore be used for both democratic and nondemocratic purposes. The desire to equate communication technology with democracy is not unique to the Internet (Winner 2005). Many communication technologies have been celebrated as democratic, but it is important to remember that communication alone is an insufficient basis for a democracy (Winner 1986).
The discourse on Internet regulation has early roots in the technology, though most researchers point to the seminal article by Johnson and Post (1996) as the start of the wider discussion. The issues at the heart of this discussion are the ability of the state to regulate and the ability of users to circumvent regulation. This conflict raises a host of other questions, such as the legitimacy of state regulation, the effects of regulation on users unwilling to be regulated, and the ability and cost of enforcing regulation that users can circumvent.
The technology brought into question one of the traditional pragmatic approaches to regulatory theory, i.e., the concept of command and control regulation, which can best be described as a system of statutory rules backed by sanctions (Black 1997). The fundamental idea is to create regulation that is “compliance oriented” (Baldwin 1995) with rules designed in such a manner as to promote the ease with which they can be adhered to.
The technology created an environment where the law, as expressed in the law book, may not be the most effective means of regulation (Lessig 1999). The Internet is susceptible to control by other means, and the greatest threat is the control of the computer code that constitutes the environment. By controlling the fundamental infrastructure that creates the digital environments, i.e., the software or code, the regulator can ensure compliance with regulation.
The freedom experienced by the early Internet users has been replaced by a growing amount of regulation. Regulatory bodies have enacted rules to cope with a diverse set of problems such as pornography, hate speech, computer viruses, and gambling. Yet each regulatory step slightly diminishes the freedom and openness of the Internet, and many argue that the advantages of this communication technology are eroded with it.
The future role of the Internet as an open communication infrastructure is not in question. It is clear that this technology is used for social interaction on a wide scale and that it will not cease to be used even if regulation seriously curtails certain activities. Whether regulatory institutions will embrace the technology as an important source of interaction and development, or whether it will become overregulated, is an open question. Embracing free and open communication technologies requires that regulators become more tolerant of the technology and permit wide-scale use. Overregulating the technology entails more limited use and fails to recognize the full potential of the Internet as an important agent of free and open communication.
References:
- Baldwin, R. (1995). Rules and government. Oxford: Oxford University Press.
- Baran, P. (2002). The beginnings of packet switching: Some underlying concepts. IEEE Communications Magazine (July), 42–48.
- Barlow, J. P. (1996). A declaration of the independence of cyberspace. Self-published manifesto, Electronic Frontier Foundation.
- Berners-Lee, T. (1998). Frequently asked questions. At www.w3.org/People/Berners-Lee/FAQ.html, accessed August 21, 2007.
- Black, J. (1997). Rules and regulators. Oxford: Oxford University Press.
- Castells, M. (1996). The information age: Economy, society, and culture, vol. 1: The rise of the network society. Oxford: Blackwell.
- Castells, M. (2001). The Internet galaxy: Reflections on the Internet, business, and society. Oxford: Oxford University Press.
- Cerf, V., Dalal, Y., & Sunshine, C. (1974). Specification of Internet transmission control program. RFC 675.
- Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. Information Society, 8, 297–309.
- Dahlbom, B., & Janlert, S. (1996). Computer future. Göteborg: Department of Informatics, Göteborg University.
- Hafner, K. (1998). Where wizards stay up late: The origins of the Internet. New York: Simon and Schuster.
- Hanseth, O. (1996). Information technology as infrastructure. Doctoral dissertation. Göteborg: Department of Informatics, Göteborg University.
- Johnson, D. R., & Post, D. G. (1996). Law and borders: The rise of law in cyberspace. Stanford Law Review, 48, 1367.
- Kling, R., Rosenbaum, H., & Sawyer, S. (2005). Understanding and communicating social informatics. Medford, NJ: Information Today Inc.
- Leiner, B. M., Cerf, V. G., Clark, D. D., et al. (2003). A brief history of the Internet, version 3.32. At www.isoc.org/internet/history/brief.shtml, accessed August 21, 2007.
- Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.
- Licklider, J. C. R. (1960). Man–computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1, 4–11.
- Postel, J. (1981). NCP/TCP Transition Plan. RFC 801.
- Saltzer, J. H., Reed, D. P., & Clark, D. D. (1984). End-to-end arguments in system design. ACM Transactions on Computer Systems, 2(4), 277–288.
- Winner, L. (1978). Autonomous technology: Technics-out-of-control as a theme in political thought. Cambridge, MA: MIT Press.
- Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. Chicago, IL: University of Chicago Press.
- Winner, L. (2005). Technological euphoria and contemporary citizenship. Techné, 9(1), 124–133.