Internet ratings systems, such as those provided by Nielsen//NetRatings or comScore Media Metrix, measure and rank the popularity of different websites and are used extensively in setting online advertising rates. The three typical online metrics are unique visitors, page views, and reach. In theory, these are simple measures: unique visitors measures the number of visitors to a website (analogous to the “audience” of a television show), page views measures the number of times a web page has been seen, and reach represents the percentage of the audience who have visited the website during the period in question (typically monthly). In practice, each of these elements is subject to some controversy due to a lack of agreement about measuring methods and differing sources of data. In addition to the main variables of unique visitors, page views, and reach, ratings companies also report on the demographic profile (age, sex, race, location, occupation, education, income, etc.) of visitors for different websites. Ratings agencies may also report directly on the frequency and volume of advertising placed on different websites.
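The three metrics can be sketched in a few lines of code. The visit records, visitor identifiers, and audience size below are invented for illustration; real measurement disagrees precisely because the inputs themselves are contested.

```python
# Hypothetical visit log for one month: (visitor_id, page_url) pairs.
visits = [
    ("u1", "/home"), ("u1", "/news"), ("u2", "/home"),
    ("u3", "/home"), ("u1", "/home"), ("u2", "/sports"),
]

page_views = len(visits)                       # every page request counts
unique_visitors = len({v for v, _ in visits})  # each person counted once

# Reach: unique visitors as a share of the total online audience
# (the audience size here is an assumed figure for this toy example).
online_population = 10
reach = unique_visitors / online_population

print(page_views, unique_visitors, reach)  # 6 3 0.3
```

The controversy lies not in this arithmetic but in its inputs: who counts as a distinct visitor, and what population the reach percentage should be taken against.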
Ratings data are highly significant for television and radio because they may determine whether shows are renewed or canceled. Online, this is rarely the case, although the use of Internet ratings data is widespread: media companies use online ratings to compare the popularity of their products to those of their competitors, and advertisers and advertising agencies use ratings to plan their advertising spending. Also, companies traditionally considered advertisers (for example, manufacturers or retailers) use online ratings to assess their websites’ effectiveness as a direct means of customer communication. Thus, Ford may subscribe to Nielsen//NetRatings’ “Automotive” ratings group in order to assess its position against GM or Toyota.
Despite their widespread use, data from ratings agencies are not the only or even the primary source of information about website audiences. Every website also automatically generates a record, known as a log file, of which pages are requested and by which computers. By analyzing their own log files, publishers can develop their own picture of their audiences. Website publishers therefore have three potential sources of information about the size and composition of their audiences: their own log files, comScore numbers, and Nielsen numbers. These three figures rarely agree and, indeed, according to industry reports, are often extremely divergent. For example, the online magazine Slate reported that for the month of December 2005, Nielsen//NetRatings put its page views at 41 million, comScore at 27 million, and its own log files at 61 million. Discrepancies like these have led publishers to question the methods of the Internet ratings agencies. A satisfactory resolution has yet to be found, with publishers, advertisers, and ratings agencies continuing to debate the issues.
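A minimal sketch of log-file analysis might look like the following. The log lines are invented examples in the widely used Common Log Format; real publisher logs and analysis tools are far more elaborate.

```python
# Toy server log lines in Common Log Format:
# host ident user [time] "request" status bytes
log_lines = [
    '203.0.113.5 - - [01/Dec/2005:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 1024',
    '203.0.113.5 - - [01/Dec/2005:10:01:00 +0000] "GET /about.html HTTP/1.1" 200 512',
    '198.51.100.7 - - [01/Dec/2005:10:02:00 +0000] "GET /index.html HTTP/1.1" 200 1024',
]

hosts = set()
page_views = 0
for line in log_lines:
    host = line.split()[0]  # the requesting computer's address
    hosts.add(host)
    page_views += 1

print(page_views, len(hosts))  # 3 2
```

Note that counting distinct hosts is not the same as counting distinct people: shared computers, proxies, and dynamic addresses all distort the figure, which is one reason log-file counts and panel-based ratings diverge.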
Methodologically, online ratings are based on a mix of two types of data, demographic data and behavioral data. The ratings companies recruit a random sample (or panel) of volunteers. Panel members are required to fill out questionnaires about demographic data. Panel members also install software on their computers, which monitors computer usage and sends data to the ratings agency via the Internet. Each time the computer is used, the panelist selects his or her name from a list of people in the household, so that different individuals’ behavior is tracked separately. The panelist’s behavior is then extrapolated to the population as a whole by matching the demographic profile of the panel to data derived from the census. This process is very similar to that used for television ratings.
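The extrapolation step described above can be sketched as a simple weighting calculation: each panelist stands in for the number of people in the population who share his or her demographic cell. All figures below, including the demographic cells themselves, are invented for illustration; the agencies' actual weighting schemes are proprietary and more complex.

```python
# Census population and panel membership per (assumed) demographic cell.
census = {"18-34": 40_000_000, "35-54": 50_000_000, "55+": 30_000_000}
panel = {"18-34": 400, "35-54": 350, "55+": 250}

# Weight: how many population members one panelist in each cell represents.
weights = {cell: census[cell] / panel[cell] for cell in census}

# Panelists in each cell who visited a hypothetical website this month.
panel_visitors = {"18-34": 12, "35-54": 7, "55+": 2}

# Projected unique visitors in the full population.
estimate = sum(panel_visitors[c] * weights[c] for c in panel_visitors)
print(round(estimate))  # 2440000
```

The estimate is only as good as the match between panel and census: if a demographic cell is under-recruited, its few panelists carry very large weights, amplifying any quirks in their behavior.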
There are, however, areas where the established methodology faces challenges. First, Internet traffic is not distributed across websites on a normal or bell curve. Instead, a very few websites attract the vast majority of the traffic while nearly all websites receive very little – this is known as a “long tail” distribution. The consequence of the long tail is that for any one website the number of panelists who visit and use the website is likely to be very small – in fact, probably too small to yield statistically reliable estimates. In order to overcome this problem, comScore has begun to create nonrandom samples to assess behavior on particular websites. These nonrandom samples are then weighted to resemble a random sample, but the methodology has yet to be widely accepted.
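A back-of-the-envelope calculation shows why the long tail defeats random panels. The visit probability and panel size below are invented, but the binomial arithmetic is standard.

```python
import math

# Suppose a niche site is visited by 0.05% of the online population in a
# month, and the panel has 30,000 members (both figures assumed).
p = 0.0005
panel_size = 30_000

expected_visitors = panel_size * p             # only 15 panelists expected
std_dev = math.sqrt(panel_size * p * (1 - p))  # binomial standard deviation
relative_error = std_dev / expected_visitors   # roughly a quarter

print(expected_visitors, round(relative_error, 2))  # 15.0 0.26
```

With an expected count of 15 panelists and a relative sampling error around 26 percent, month-to-month ratings for such a site would swing wildly even if the panel were perfectly representative – hence the appeal, and the controversy, of supplementing the random panel with nonrandom samples.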
Second, in contrast to most other media, the Internet is used as much at work and at educational establishments as at home. This has led the ratings companies to establish “at work” and “at home” panels, and has also led to new issues in trying to quantify the level of duplication between the panels, so that a single panelist accessing the Internet both from work and from home is not counted as two people.
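The double-counting problem has a simple set-theoretic core, sketched below with invented panelist identifiers. In practice the agencies cannot directly observe which people belong to both panels and must estimate the overlap statistically; this sketch only shows the arithmetic being estimated.

```python
# Invented panelist IDs; "p03" and "p04" access the Internet in both places.
at_home = {"p01", "p02", "p03", "p04"}
at_work = {"p03", "p04", "p05"}

naive_total = len(at_home) + len(at_work)  # 7: double-counts p03 and p04
deduplicated = len(at_home | at_work)      # 5: each person counted once
overlap = len(at_home & at_work)           # 2: the duplication to estimate

print(naive_total, deduplicated, overlap)  # 7 5 2
```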
Third, Internet usage is not typically confined to a single national context. Internet ratings agencies are therefore struggling to develop panels that will allow them to provide accurate ratings on a global scale. Typically, US panels are the most robust, and questions remain about global standardization: different factors are used to extrapolate to the whole population in the US than in other countries, and “at work” and “at home” panels are handled by different methods in US and non-US panels.
Finally, as of 2007, Internet ratings agencies were not assessing traffic from platforms other than the personal computer – such as mobile phones, gaming devices, or handheld devices – and this remains a methodological challenge for the industry.
While television and radio ratings have received a wide range of academic study, Internet ratings systems have yet to generate comparable discussion or critique. Many questions therefore remain to be answered. For example, how appropriate and accurate are the new methodologies with which these companies are experimenting? How much, and under what conditions, do publishers and advertisers rely on agency data versus internal sources? How much do these data influence publishing decisions? As the global online media market develops, it seems inevitable that ratings agencies will come under greater scrutiny by industry and academy alike.
- Ang, I. (1990). Desperately seeking the audience. London: Routledge.
- Boutin, P. (2006). Slate has 8 million readers, honest: Or maybe it’s 4 million. Which should you believe? At www.slate.com/id/2136936, accessed January 9, 2007.
- comScore (2006). ComScore networks methodology and technology overview. At www.comscore.com/method, accessed January 9, 2007.
- Meehan, E. (1992). Why we don’t count: the commodity audience. In P. Mellencamp (ed.), Logics of television: Essays in cultural criticism. London: BFI, pp. 117–137.
- Nielsen//NetRatings (2006). NetView panel recruitment/management and audience enumeration/estimation. Nielsen//NetRatings white paper.