Readership research employs empirical methods to investigate print media usage, focusing mainly on magazines and newspapers that appear periodically. Of primary importance in this context are readership analyses that ascertain findings on print-media coverage (reach or cumulative audience) and readership structure (composition of readership to describe print-media target groups). These methods are supplemented by reception analyses, which investigate reading habits in a more general sense.
A distinction can be drawn here between readership research as media advertising research, which deals with the performance of a print medium as an advertising medium, and as editorial readership research, which is aimed at optimizing newspaper and magazine content and/or layout. The lion’s share of applied readership research focuses on optimizing media planning for advertising purposes as well as optimizing the content and layout of print products. In comparison, academic reception research, which is designed to ascertain more fundamental insights into readership, plays a relatively minor role. The dominance of media advertising research in practice can be explained by its far-reaching economic significance.
The origins of systematic readership research based on empirical methods go back a very long way. In the US, for example, standardized questionnaires were already being used in the early 1920s to conduct commercially motivated mass surveys among newspaper readers (to help publications respond to readers’ interests and thus increase circulation, to provide evidence of readers’ interest levels and purchasing power, and thus increase advertising revenues). In 1921 George Hotchkiss conducted a written survey among wealthy and well-educated New York residents to investigate their newspaper-reading habits. His self-administered survey method was further developed by Ralph O. Nafziger and his students at the University of Wisconsin. During the same period, George Gallup was working on a different approach at the University of Iowa. Gallup developed the face-to-face survey method based on interviews with a representative cross-section of readers, a method he first employed in 1927 to conduct copy tests to establish whether particular articles were read in full, merely glanced through or paid no attention at all.
The widening spectrum of print media available, for example the ever-increasing number of new general-interest and special-interest magazine titles, has brought with it an increasing necessity for comparative readership coverage and readership-structure findings upon which commercial advertisers and advertising agencies can base their media-planning decisions. Investigations funded by individual publishers have in the meantime been superseded by syndicated national readership surveys in many countries. If readership surveys are to obtain accurate findings, above all for titles with a small circulation and low coverage, they require very large samples, which also means very large research budgets. The need to raise substantial funding led to joint industry surveys commissioned by a great number of publishing houses and conducted by large market research companies. These surveys are often overseen by technical advisory commissions that serve to lend the findings a legitimate, “quasi-official” status, thus raising general acceptance levels. Typical examples of such joint industry readership surveys include: the MRI in the US, the NRS survey in the UK, the media analyses (MA) in Germany, or the AMPS in South Africa.
In his “Summary of current readership research” for the Worldwide Readership Symposium 2005 in Prague, Erhard Meier reported on 91 readership surveys in 71 countries, a fact that reflects both the centralization and the globalization of research companies today. Multinational organizations such as Ipsos, AC Nielsen, Millward Brown, or TNS each presently conduct national readership surveys in 10 or more different countries. Sample sizes range from 1,000 (Bahrain and Zambia) to about 250,000 respondents (India).
Basic Definitions And Methods
Newspaper or magazine circulation says little in itself about the number of people a print medium actually reaches. The reality of the situation is far more complex: many copies are printed but distributed free of charge rather than sold or are not distributed at all, other copies are sold but are not read and, above all, a great number of copies are used by more than just one person, e.g., at home, at work, or in a doctor’s waiting room. Despite the fact that data supplied by publishing houses is verified by so-called Audit Bureaus of Circulation in many countries to prevent manipulation, the question of how many readers a print medium actually reaches remains unanswered.
What Is A “Reader”?
Because circulation data fails to paint the complete picture, it is necessary to conduct readership surveys based on broad samples. In order to do this, it is necessary to define exactly who qualifies as a reader. Should only people who read a publication from cover to cover very carefully be counted, or do people who just flick through a publication, stopping only to read a few headlines or glance at a few pictures, also qualify as readers? What if a publication is read more than once: do all reading events count? Does it only count if a publication is read on one specific day, or does it count if it was read at any time within the issue period or even well beyond the issue period, e.g., at some time in the last three months? Clear definitions of who counts as a reader are essential as they provide the buyers (advertisers and advertising agencies) as well as the sellers (publishing houses) of advertising space with what equates to a mutually acceptable currency.
The most common of such currencies is Average Issue Readership (AIR), i.e., the number of people who have read or looked at some or all of the average issue of a publication. Just flicking through is enough: it is not necessary to have read a publication through carefully to have come into contact with an advertisement placed there. Average Readers per Copy (ARPC) can be calculated by dividing average-issue readership by average circulation. There are many different ways of measuring average-issue readership. The four main techniques are described here, namely: “Through-the-Book,” “Recent Reading,” “First Reading Yesterday,” and “Readership Diaries.”
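The arithmetic behind ARPC is straightforward. The following sketch uses hypothetical figures purely for illustration:

```python
def readers_per_copy(air: float, circulation: float) -> float:
    """Average Readers per Copy: average-issue readership divided by
    average circulation (both figures taken from the same period)."""
    if circulation <= 0:
        raise ValueError("circulation must be positive")
    return air / circulation

# Hypothetical example: a magazine with an average circulation of
# 500,000 copies and an AIR of 2,100,000 readers.
print(readers_per_copy(2_100_000, 500_000))  # 4.2 readers per copy
```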
The oldest of these methods is the Through-the-Book (TTB) technique, which was first used in 1936 to estimate the readership of Life magazine. It is the only one of the four approaches described here to employ recognition of a specific issue to estimate readership. Respondents are shown actual full issues of publications; stripped or skeletonized issues are often used to reduce the burden on both respondents and interviewers when a great number of publications are presented in one interview. Bill Simmons and Alfred Politz were pioneers of the TTB approach. Empirical tests have shown that TTB estimates are prone to both overclaims (e.g., caused by perceived social desirability or prestige effects) and underclaims (e.g., caused by forgetfulness), as well as to frequent confusion of similar print titles with the same kind of content. To prevent confusion, similar titles are often presented together in a group. The age of the issues presented also has a major influence on findings. If the issue is too old, there is a danger that respondents will have forgotten about having read it, whereas if the issue is very recent it may not yet have accumulated its full audience.
The most common method for measuring readership today is the Recent Reading (RR) technique, which was brought into use by the Institute of Practitioners in Advertising in 1952. In Europe, it is also known as the IPA technique. There are two fundamental differences between RR and TTB. First, respondents are not asked whether they came into contact with one specific issue of a newspaper or magazine, but about contact with any issue. “The readership estimate depends not on the respondent’s ability to recognize a specific issue as one they have previously read, but on their accurate recall of when it was that they last came into contact with the publication in question” (Brown 1999, 65). Second, the question mainly used to estimate readership in this technique is: “When did you last read or look at any copy of . . . (title),” and it is posed either as an open question or with response categories, e.g., “yesterday”/“within the last seven days”/“between one week and one month ago,” and so on. Even in this case, where the information requested seems to be so simple, there is still a danger that respondents may not provide reliable answers from memory. In particular, if the most recent reading event is already quite some time ago, a telescoping effect is often observed, i.e., respondents believe something happened more recently than it actually did.
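The counting rule behind the RR technique can be sketched as follows: a respondent contributes to average-issue readership if their most recent reading event falls within one issue period of the publication. This is a simplified illustration, not any survey's actual estimation procedure; the issue-period values and figures are assumptions:

```python
# Assumed issue periods in days; real surveys define these per title.
ISSUE_PERIOD_DAYS = {"daily": 1, "weekly": 7, "monthly": 30}

def estimate_air(days_since_last_read: list[int], frequency: str,
                 population: int) -> float:
    """Project the share of respondents whose last reading event fell
    within one issue period onto the total population."""
    period = ISSUE_PERIOD_DAYS[frequency]
    in_period = sum(1 for d in days_since_last_read if d <= period)
    return in_period / len(days_since_last_read) * population

# Hypothetical data: 4 of 10 respondents last read the weekly within
# the past 7 days; projected onto a population of 1,000,000.
print(estimate_air([1, 3, 10, 6, 40, 2, 9, 30, 12, 14],
                   "weekly", 1_000_000))  # 400000.0
```

Note how the estimate hinges entirely on the accuracy of each "days since last read" answer, which is exactly where the telescoping effect distorts the result.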
Intensive methodology research has uncovered two further phenomena limiting the accuracy of the RR technique. The first is replicated readership, which comes about when people spread their readership of a given issue over more than one issue period, leading to overestimation. The second is parallel readership, which occurs when two or more issues of the same publication are read in the same period, leading to an underestimation of average-issue readership. Although these two sources of error cancel each other out to a certain extent, empirical tests indicate that the net effect tends to be understatement.
Many readership surveys supplement the main recent-reading question by asking about frequency of reading. A distinction can be drawn here between questions using verbal scales (e.g., “almost always,” “quite often,” or “only occasionally”), questions using numerical scales (e.g., “How many out of twelve issues of magazine XYZ have you read in the past 12 months?”), and questions that employ a combination of verbal and numerical scales. However, these frequency questions also depend on an ability to remember events to such a degree of accuracy that many respondents are likely to struggle to respond reliably. In order to increase recall accuracy in particular, and to eliminate biases caused by replicated reading, readership researchers have experimented with ways to reduce the recall period, e.g., by including questions about First Reading Yesterday (FRY).
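Media planners commonly turn a numerical frequency claim into a per-issue reading probability, which can then be used to model expected contacts across an advertising schedule. The sketch below is a simplified illustration of that idea, with hypothetical figures:

```python
def reading_probability(issues_read: int, issues_published: int = 12) -> float:
    """Per-issue reading probability inferred from a claim such as
    'read k out of 12 issues in the past 12 months'."""
    return issues_read / issues_published

def expected_contacts(prob: float, insertions: int) -> float:
    """Expected number of issue contacts over a schedule of insertions,
    assuming independent issues (a strong simplification)."""
    return prob * insertions

p = reading_probability(9)      # claims 9 of 12 issues -> p = 0.75
print(expected_contacts(p, 4))  # 3.0 expected contacts over 4 insertions
```

The simplification is worth stressing: if respondents cannot recall their reading frequency accurately, every quantity derived from it inherits that error.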
A further method referred to earlier for estimating readership data is Readership Diaries. Households selected at random are asked to regularly record all the newspapers and magazines they read during a specified period of time (e.g., 1 month) in a diary, including additional details on which issue was read, how much was read, whether the issue in question was being read for the first time, etc. Researchers face serious methodological problems when using the readership diaries method; recruiting a representative cross-section of panel members, for instance, and ensuring they continue to participate reliably over time can prove extremely difficult. A further drawback of diaries is the tendency of respondents to behave unnaturally (conditioning effect).
All of the methods employed to estimate print-media readership described so far suffer from one common weakness, in that they all rely exclusively on statements provided by respondents, who may fail to recall past events correctly or filter responses according to social norms, e.g., social desirability. The accuracy of readership estimates ascertained using these methods therefore depends on factors such as the number and type of visual aids employed and the length of time that has elapsed since the reading event took place. Alternative approaches to ascertaining readership data involve technical measurement techniques designed to gather data independently of replies provided by respondents (“measurements not responses”). Examples of this type of approach are: the use of eye cameras to track reading, electromagnetic sensors fitted to wristwatches to register page contacts, portable bar-code scanners to register publication details, or hidden cameras to validate page-contact findings. However, these methods have so far not progressed beyond laboratory tests conducted under unrealistic conditions.
Erhard Meier’s review of methodological observations referred to above shows that most national readership surveys continue to be based on face-to-face interviews with broad samples of the population. Of the 91 national readership studies conducted around the world during the review period, 71 used face-to-face interviews, of which the majority (63) were conducted with pen and paper, 5 as computer-assisted personal interviews (CAPI), and 3 as double screen CAPI interviews. A total of 10 national studies used self-completion questionnaires and 7 used computer-assisted telephone interviews (CATI). There are a few countries where a mix of methods is used, e.g., telephone interviews combined with self-completion (Norway), face-to-face interviews with pen and paper supplemented by subsamples conducted using CATI, or (in the Netherlands and Germany) subsamples using computer-assisted self-interviews (CASI).
Because large national readership surveys often include several hundred different newspaper and magazine titles, it is common practice to pose screening questions at the beginning of interviews to reduce the title load per respondent by excluding those titles respondents “only know by name,” or titles that are completely “unknown” to them. Subsequent questions, usually beginning with the frequency question followed by the recency question, are only posed about the remaining publications. Logo cards, either in black and white or in color, are the main form of visual aid employed in face-to-face readership interviews to assist respondents’ recall, as well as to help avoid title confusion. Print titles and categories (dailies, weeklies, fortnightlies, monthlies, etc.) are often rotated to counterbalance possible order effects. In many readership surveys, respondents are then posed a number of supplementary questions to establish, for example, how long, where, and on how many days reading took place, or to ascertain their “relationship” with the publication read, e.g., by asking about the amount read (“all/nearly all,” “over half,” “about half,” “less than half,” etc.). In order to establish reader “involvement,” respondents are also often asked to gauge how much they would “miss” a publication if it were no longer published.
Another way of reducing load per respondent employed in certain large national readership surveys is to split the titles included, e.g., to create two groups of 150 publications and conduct a survey for each. The findings from the two representative surveys are fused to create one combined dataset. This so-called “marriage of data” via common connecting links (e.g., socio-demographic or attitude variables) is, however, wide open to methodological criticism.
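One reason such fusion attracts methodological criticism is visible even in a toy version of the procedure: readership recorded for a respondent in one survey is simply "donated" to the statistically nearest respondent in the other, on the assumption that people who match on the linking variables also match in their reading behavior. The sketch below is a deliberately minimal nearest-neighbor fusion; the variable names and data are illustrative, not drawn from any actual survey:

```python
def fuse(survey_a: list[dict], survey_b: list[dict]) -> list[dict]:
    """For each respondent in survey A, find the nearest respondent in
    survey B on the common linking variables (here: age and income
    class) and donate that respondent's title readership."""
    fused = []
    for a in survey_a:
        donor = min(survey_b,
                    key=lambda b: abs(a["age"] - b["age"])
                    + abs(a["income_class"] - b["income_class"]))
        fused.append({**a, "titles_b": donor["titles_b"]})
    return fused

a = [{"id": 1, "age": 30, "income_class": 2}]
b = [{"age": 29, "income_class": 2, "titles_b": ["XYZ weekly"]},
     {"age": 55, "income_class": 4, "titles_b": ["ABC monthly"]}]
print(fuse(a, b)[0]["titles_b"])  # ['XYZ weekly']
```

The weakness is apparent: nothing guarantees that two people of similar age and income read the same titles, so the "married" dataset can overstate or understate duplication between the two title groups.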
Rudimentary contact data (having “read or flicked through” a newspaper or magazine) tells advertisers little about the potential effectiveness of their adverts. In contrast, data on the quality of reading allows conclusions to be drawn about the chances readers have of coming into contact with advertisements, which is a prerequisite for advertising effectiveness. Respondents who have read a publication carefully from cover to cover, for example, are more likely to come into contact with a particular advertisement than those who have only taken a brief look inside or flicked through.
In summing up his comprehensive review of methods developed in the field of readership research, Michael Brown states that there is no “gold standard,” no single, inalienable currency that provides an equally valid measure of readership across the entire spectrum of print titles and categories: “All methods have differing advantages and limitations. Arguments as to methods’ ‘validity,’ in any absolute sense, are arid; they should be judged by their ability to deliver readership estimates which allow comparisons between different newspapers and magazines which are minimally biased” (Brown 1999, 83). For as long as there are no universally accepted methods, it will continue to prove difficult to further harmonize the many different techniques being employed to estimate readership around the world today, as well as to establish common methodological standards. This is essential, however, for international advertising planning in an increasingly globalized world economy.
Editorial Readership Research
While newspaper and magazine readership in newly developed countries is rising in line with growing literacy rates, it is in decline in many developed industrial countries. In the US, the fall in newspaper readership in recent decades is so dramatic that it represents a cultural shift away from newspaper reading. Fewer and fewer young people are becoming regular newspaper readers and many magazines are, to an increasing degree, only being read sporadically. Young people in particular read print media more impatiently and selectively, tending to scan rather than read thoroughly.
Empirical readership research has an important role to play in optimizing print media, providing vital insights for decision-makers. Surveys among readers based on the copy test technique, for example, can be used to establish which of the items in an issue read yesterday or the day before yesterday were “read in full,” “only scanned,” or “not looked at at all.” Here too, attempts have been made recently to introduce technical measurement to avoid relying on readers’ questionable ability to respond reliably from memory. One example is the “Reader Scan” method, whereby a small panel of readers electronically mark the point up to which an article has been read during the act of reading. However, along with the apparent gains in accuracy, this method also brings with it a number of problems. For example, there are difficulties with gathering representative samples for such studies as it is normally only possible to attract participants with above-average motivation as readers. It is also impossible to completely counter conditioning effects in such tests (“unnatural behavior”).
The increasing inter-media competition has breathed new life into readership research in recent years and new perspectives are opening up, for example research into the networking of cross-media usage between print and online media (“print in a multimedia world,” “integrated communication”).
- Belson, W. (1962). Studies in readership. London: Business publications on behalf of the Institute of Practitioners in Advertising.
- Brown, M. (1999). Effective print media measurement: Audiences . . . and more. Harrow: Ipsos-RSL.
- Joyce, T. (1987). A comparison of recent reading and full through-the-book. In H. Henry (ed.), Readership research: Theory and practice: Proceedings of the third international symposium, Salzburg. Amsterdam: Elsevier, pp. 116–122.
- List, D. (2006). Measuring audiences to printed publications. At www.audiencedialogue.org, accessed December 2006.
- Lysaker, R. (1989). Towards a gold standard. In H. Henry (ed.), Readership research: Theory and practice: Proceedings of the fourth international symposium, Barcelona. London: Research Services and British Market Research Bureau, pp. 172–179.
- Meier, E. (2005). Looking for best practice. Session papers from the Worldwide Readership Research Symposium (Prague). Harrow: Ipsos, pp. 1–9.
- Politz, A. (1967). Media studies: An experimental study comparing magazine audiences by two questioning procedures. New York: Alfred Politz.
- Schreiber, R., & Schiller, C. (1984). Electro-mechanical devices for recording readership: A report of a development project. In H. Henry (ed.), Readership research: Proceedings of the second international symposium, Montreal. Amsterdam: Elsevier, pp. 198–199.
- Tennstädt, F., & Hansen, J. (1982). Validating the recency and through-the-book techniques. In H. Henry (ed.), Readership research: Theory and practice: Proceedings of the first international symposium, New Orleans. London: Sigmatext, pp. 229–241.