The domain name system (DNS) was developed mainly by Paul Mockapetris and Jon Postel in the early 1980s. The main specifications are laid down in RFCs (Requests for Comments), managed by the Internet Engineering Task Force (IETF). These are the accepted technical standards for the Internet; key ones are RFC 1034 and RFC 1035 of November 1987, which specified the DNS concepts and implementation, and RFC 1591 of March 1994, which defined the DNS structure and delegation.
From Numbers To Names
Until 1983 communication among computers within and across networks was organized by numbers, so-called IP addresses. An IPv4 address is a 32-bit number, usually written as four decimal numbers separated by dots; one part of the address identifies the network, the remainder the individual computer. Such a system was user-unfriendly: IP addresses had more digits than telephone or fax numbers, and users had to remember up to 12 figures plus the separating dots. Mockapetris and Postel wanted to give Internet communication via computers a human face. The idea was to link the numeric IP address to a memorable name so that end-users could have an identity via a domain name. The system, which became the DNS, follows the layered system of personal names, in which people have first, second, and family names. The DNS is organized as a hierarchy with a top level domain (TLD, the "family name") at the top.
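The hierarchy can be seen by reading a domain name from right to left. As a minimal illustrative sketch (the example name is hypothetical), the following Python snippet splits a name into its labels, from the most general (the TLD) to the most specific:

```python
def parse_domain(name: str) -> dict:
    """Split a domain name into its hierarchical labels,
    read from the most general (TLD) down to the host level."""
    labels = name.rstrip(".").split(".")
    return {
        "tld": labels[-1],                               # top level domain, e.g. "org"
        "sld": labels[-2] if len(labels) > 1 else None,  # second level domain
        "lower": labels[:-2],                            # third and lower levels
    }

print(parse_domain("www.example.org"))
# {'tld': 'org', 'sld': 'example', 'lower': ['www']}
```

Reading "www.example.org" this way mirrors the DNS hierarchy: .org is the TLD ("family name"), example the second level, and www a name beneath it.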
gTLDs And ccTLDs
Postel introduced six generic TLDs (gTLDs) with three characters each: .edu (educational institutions), .gov (governmental entities), and .mil (military) for use in the US only; and .com (commercial institutions), .net (network), and .org (organization) for the rest of the world. .arpa was reserved for the management of the zone files and later .int was added for intergovernmental organizations. The three global gTLDs (.com, .net, and .org) were managed by Network Solutions Inc. (NSI), which also managed .edu. The US government took over the management of .gov, and the US Department of Defense the management of .mil.
As people asked for TLDs for other countries, Postel introduced a second category, the country code TLDs (ccTLDs). The problem was to specify what counts as a country, and in RFC 1591, Postel argued that “IANA is not in the business of deciding what is and what is not a country” (Postel 1994). Postel instead used the list of countries and territories maintained by the International Organization for Standardization (the ISO 3166-1 standard), which included two-letter codes for 243 countries and territories.
The zone files for the TLDs were hosted by a system of 13 root servers. For TLD management, Postel – working at this time for the Information Sciences Institute (ISI) at the University of Southern California – created the Internet Assigned Numbers Authority (IANA) following a recommendation by the US Department of Commerce, which had oversight of the financing of Internet research via the National Science Foundation. ISI and the Department of Commerce entered into a contract whereby IANA was given responsibility for the management of the DNS and the IP address space. The Department of Commerce reserved the right to authorize the publication of TLD zone files in the A root server.
Postel used personal contacts and recommendations from friends to identify TLD managers with the capacity to run a name server and with a sense of responsibility towards the global Internet community. In the early days, delegation was mainly a handshake between Postel and the designated manager, a practice Postel maintained until his death in 1998; neither government legislation nor political or legal processes were involved. Following his death, the Internet Corporation for Assigned Names and Numbers (ICANN) was established. ICANN coordinates the DNS, including the delegation and re-delegation of TLDs, under oversight by the Department of Commerce.
Each manager of a TLD has full responsibility for the second level domains (SLDs) beneath it. Registries can independently register SLDs, keeping all registered SLDs in their own databases. Holders of registered SLDs can in turn register names at the third level and below, which leads to multilayered domain names. This decentralized and distributed system makes the DNS very robust and efficient.
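The delegation chain described above can be sketched as a toy lookup over nested zones. The names and the address below are invented for illustration (192.0.2.10 is a reserved documentation address), and real resolution queries a separate name server at each step rather than a single data structure:

```python
# Toy model of DNS delegation: each level of the hierarchy only knows
# the level it delegates to, mirroring root -> TLD registry -> SLD registrant.
ROOT = {
    "com": {                      # TLD registry's database
        "example": {              # SLD registered with the .com registry
            "www": "192.0.2.10",  # record held by the SLD holder's name server
        },
    },
}

def resolve(name: str, zone=ROOT):
    """Follow delegations label by label, from right (TLD) to left (host)."""
    node = zone
    for label in reversed(name.split(".")):
        node = node[label]
    return node

print(resolve("www.example.com"))  # 192.0.2.10
```

Because each zone answers only for the names directly below it, no single database has to hold the whole name space, which is the source of the robustness and efficiency noted above.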
The Commercial Value Of Domain Names
The system has worked efficiently since its introduction. Postel had estimated that the 250+ TLDs offered enough space for global communication, but after the invention of the world wide web and the dot-com boom of the late 1990s, domain names not only created a personal identity in cyberspace, they also acquired a commercial value and came to be treated as commodities or assets and as part of a proprietary system linked to trademark names. Companies like Amazon, Yahoo! and Google built their empires on domain names.
Domain names were easy to understand, find, and remember, and they became the subject of a growing domain name marketplace. In 1999 domain names like business.com or flowers.com were being sold for US$1 million or more, and by 2006 the secondary DNS market was worth approximately US$1 billion. Highly valued names, such as those in the .com domain, have become a scarce resource and there has been pressure to introduce more TLDs, especially generic names. There is no technical reason to limit the number of TLDs to 250+. Given that the name server for the .com domain managed by VeriSign (until 2000 by NSI) can handle more than 60 million SLD data files, there is no reason why root servers could not manage thousands of TLD data files.
The Search For A Global DNS Management System
By the mid-1990s Postel wanted to introduce 150 new gTLDs and sought to use the newly established Internet Society (ISOC) as an institutional home for the DNS management. NSI opposed this, fearing the loss of its monopoly in the emerging domain name market. Later Postel began talks with the International Telecommunication Union (ITU) and in 1996 he initiated the Interim Ad Hoc Committee (IAHC), with IANA, IAB (Internet Architecture Board), ISOC, INTA (International Trademark Association), WIPO, and ITU as members. The six organizations negotiated a memorandum of understanding (gTLD-MoU), signed in Geneva in May 1997. The plan was to establish a policy oversight committee (POC) as a public–private partnership. The POC was to oversee the DNS and, as a first step, to introduce seven new gTLDs. To demonopolize the registration of domain names in the gTLDs space, a group of some 35 registrars was to be licensed and to join a council of registrars (CORE). Another proposal was to move the A root server from the US to Geneva, giving the ITU a role as the repository of the MoU.
The plan was heavily criticized by the US government and by NSI. In a letter to the ITU the US Secretary of State, Madeleine Albright, rejected the MoU. She argued that the ITU Secretary General had not consulted with member states before signing the MoU and that the ITU was therefore not authorized to join the gTLD-MoU. Recognizing that the Department of Commerce's contracts with IANA and NSI were terminating in October 1998, the US government proposed an alternative strategy for the privatization of the DNS.
Towards The Establishment Of ICANN
In January 1998 the Department of Commerce published a Green Paper proposing the creation of a new corporation for Internet names and numbers. This paper became the subject of controversial international discussion. The European Commission and the governments of Canada and Australia rejected the proposal as being unbalanced, arguing that the Internet was a public resource for global communication and needed international oversight. In June 1998 a White Paper proposed the establishment of NewCo, intended to uphold the principles of stability, competition, bottom-up policy development, and global representation. This paved the way for the establishment of ICANN in October 1998. A week before the launch of ICANN, the ITU Plenipotentiary Conference in Minneapolis recognized the principle of private sector leadership for Internet governance and for the management of the DNS.
ICANN entered into a MoU with the Department of Commerce and received the mandate to coordinate the root server system, the IP address system, and the DNS. The plan was that ICANN would become independent of the US government in 2001, but this did not happen. Instead, the MoU was extended and then transformed in 2006 into a joint project agreement (JPA), due to terminate in October 2009.
After its creation, ICANN faced the dual challenge of establishing a policy for handling domain name conflicts over trademark protection with respect to the gTLD domain name space, and of introducing new gTLDs. In 2000 ICANN adopted a uniform dispute resolution policy (UDRP) that established a global system for resolving domain name conflicts in the gTLD domain name space. The UDRP recognizes the role of trademark-protected names in the DNS, defines registration of such domain names in bad faith, and offers a privately organized global online dispute resolution system.
In early 2000 ICANN started a process for the introduction of new gTLDs and in a pilot phase licensed seven new TLDs (.info, .biz, .name, .coop, .aero, .museum, .pro). In 2005 it licensed sponsored TLDs (sTLDs) .asia, .mobi, .cat, .jobs, .tel, and .travel, but rejected other proposals, including one for .xxx. Its Generic Names Supporting Organization (GNSO) is developing a more stable gTLD policy. The plan is to create a clear legal procedure defining the political, economic, and technical criteria that applicants for new gTLDs have to meet, and it is expected that the ICANN board will adopt the new policy in early 2008, thereby making it possible for a substantial number of new gTLDs to be introduced.
DNS And The World Summit On The Information Society (WSIS)
During the WSIS process that started in 2002, the DNS emerged as part of the controversial debate on Internet governance. A number of UN member states, notably China, India, Brazil, South Africa, and Saudi Arabia, called for a new governmental body to oversee the Internet's core resources, including the DNS. The continuing role of the US government in the authorization and publication of TLD zone files in the Internet root (not the so-called “hidden server” operated by VeriSign) was criticized. The issue was discussed in the UN Working Group on Internet Governance (WGIG), which was established by UN Secretary General Kofi Annan after the first phase of the WSIS in 2003. During the second WSIS phase in 2005, governments recognized the role of ICANN but agreed on a number of general political principles. The Tunis Summit created a new Internet Governance Forum (IGF) for discussion of Internet-related issues, and a process of enhanced cooperation, involving a consultation process, was launched to clarify the future oversight of the Internet's core resources, including the DNS. It is expected that in 2010, when the JPA between ICANN and the Department of Commerce will have terminated and the mandate of the IGF is also due to end, the issue will come back onto the agenda of global Internet policy debate.
The development of the DNS has been analyzed by a number of researchers both from the technical and the social science points of view. A key analysis was published by Lessig (1999) on the theme of Code and other laws of cyberspace, arguing that great care should be taken to develop an understanding of the interaction between the code embedded through software development and its effect – and potential constraints – on the way the Internet is used. Mueller’s (2002) Ruling the root presented a detailed analysis of the politics and institutional processes influencing the decisions of ICANN. Various authors have provided further reflections on the tensions inherent in the need to address technical and policy issues on a global scale, including Drake’s (2005) Reforming Internet governance, which offered a critical analysis of the work of the WGIG.
There have been studies over the past decade highlighting the complex environment in which Internet governance encompassing issues raised by domain names must operate. Cairncross (1997) highlighted the “death of distance,” while others such as Castells (2001) have investigated the network environment from sociological perspectives. Legal issues have been critically reviewed by Drucker and Gumpert (1999) and, with respect to intellectual property considerations, by Lessig (2001). Others, including Loader (1997), have tackled governance issues from different disciplinary perspectives. There is a continuing need for interdisciplinary studies, especially to understand how the technical dynamics of domain names and root servers are played out on the global stage.
- Cairncross, F. (1997). The death of distance: How the communication revolution is changing our lives. Boston, MA: Harvard Business School Press.
- Castells, M. (2001). The Internet galaxy: Reflections on the Internet, business, and society. Oxford: Oxford University Press.
- Drake, W. (ed.) (2005). Reforming Internet governance. New York: UN Publishing House, UN ICT Task Force Series no. 12.
- Drucker, S. J., & Gumpert, G. (1999). Real law and virtual space: Regulation in cyberspace. Cresskill, NJ: Hampton Press.
- Kleinwächter, W. (2001). Governance in the information age. Aarhus: University of Aarhus Press.
- Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.
- Lessig, L. (2001). The future of ideas: The fate of the commons in a connected world. New York: Vintage Books.
- Loader, B. D. (ed.) (1997). The governance of cyberspace. London: Routledge.
- Mueller, M. (2002). Ruling the root: Internet governance and the taming of cyberspace. Cambridge, MA: MIT Press.
- Postel, J. (1994). RFC 1591: Domain name system structure and delegation. At www.isi.edu/in-notes/rfc1591.txt, accessed August 7, 2007.