The term “public opinion polling” generally refers both to the representative survey method itself and to the institutes that specialize in employing it, particularly commercial survey institutes. Other terms commonly employed in this context are “public opinion research,” “survey research,” or simply, if somewhat confusingly, “public opinion.” The term “demoscopy” (Greek: “observation of the public”), originally suggested by American scientist Stuart Dodd (Dodd 1946), is also commonly used in some European countries, particularly in connection with political debate, although it has not gained a foothold in English-speaking countries.
Along with media content analysis, the laboratory experiment, and participant observation, public opinion polling is one of the most important tools in empirical communication research. Survey research serves as a vital source of information in the social sciences, as well as in the areas of market research, media research, and the political sphere.
The three cornerstones of survey research are: the standardization of the investigative technique, i.e., completing interviews using a firmly worded questionnaire; analyzing the findings in aggregate, in other words, observing respondents as a group, not as individuals; and the random selection of respondents to form a group representative of the total population in question. These three core elements of opinion research were first combined systematically in the early twentieth century, although they had already been employed previously in a number of statistical surveys and research projects.
Starting in the early Middle Ages at the latest, there were a number of attempts to investigate the population’s opinions on current issues by conducting standardized surveys of a great number of people. The first document that can be viewed, at least in terms of its approach, as a standardized questionnaire designed to ascertain opinion, stems from the year 811. It is a list of questions compiled by Emperor Charlemagne to be answered by local dignitaries in the provinces of his realm. The questions were designed to investigate the reasons for symptoms of unrest in the empire at that time, for instance, the growing number of men deserting from the army (Petersen et al. 2004). Beginning in the late eighteenth century, we find a steady series of developments in survey method, culminating during the nineteenth and early twentieth centuries in a rich tradition of empirical social research based on surveys. A number of remarkable studies were completed from the mid-nineteenth century onwards, especially in Germany (cf. Oberschall 1965).
Attempts to apply statistical principles to people date back even further. The Old Testament describes a census conducted by King David (2 Samuel 24); and statistical data on the population was collected regularly in the Roman empire, for example by the census documented in the nativity story in the Gospel according to Luke (Luke 2:1–3). During the eighteenth century, the so-called “moral statisticians” addressed the question of why the number of suicides, crimes, births, and other seemingly arbitrary acts remained constant from year to year. Gradually, they realized that even acts resulting from highly individual motives adhere to calculable statistical laws when viewed in terms of society as a whole.
The development that ultimately revolutionized survey research and, consequently, broad swaths of the social sciences was the concept of selecting respondents according to the statistical principle of random selection. The first survey of this kind was completed by British economist Arthur Bowley in 1912 (Bowley 1915). The breakthrough of the modern method of social research based on representative samples, however, came when American researchers George Gallup, Elmo Roper, and Archibald Crossley employed this technique for their forecasts of the 1936 US presidential elections.
Aside from choosing respondents using a technique that was essentially based on the principle of random selection, another novel aspect of their investigation was the use of interviewers to question respondents face-to-face. Prior to this study, questionnaires were commonly sent out by mail. By using the face-to-face technique, Gallup, Roper, and Crossley ensured that the sample was representative; moreover, they found that when contacted personally by interviewers, a substantial share of the randomly selected respondents were actually willing to participate in the survey. The methods employed then have since been refined in many ways, but the fundamental methodological principles are still applied in all reputable representative surveys today.
Fundamental Methodological Principles
Although representative survey findings are now a standard feature in newspaper and television reporting, the survey method still seems somewhat puzzling to many, who wonder how it is possible to draw firm conclusions about the opinions of a population of millions based on interviews with just one or two thousand people. These doubts would be justified if the goal of survey research were to ascertain the opinions and modes of behavior of each single person in all their complexity, but surveys are not intended for that kind of individual case study. When it comes to opinion polling, individual members of society are not the object of investigation, but rather society as a whole.
In completing surveys, strict rules regarding standardization and structuring must be adhered to: as far as possible, all respondents are to be treated in the same way, regardless of whether they are university professors or unskilled workers. All respondents are posed the same questions, using identical question wordings and response alternatives. This technique provides no information about the special characteristics and motives of individual respondents. Rather, it enables researchers to determine what respondents think about a certain issue on average (cf. Noelle-Neumann & Petersen 2005, 65–79).
In public opinion polls the respondents are not selected arbitrarily, but in accordance with strict rules that ensure that the group of people interviewed is representative of the total population, thus enabling researchers to generalize the responses obtained during the interview. Representative samples can be drawn using two different techniques. The “random method” adheres to the lottery principle, with samples being selected at random from the total universe. The fundamental principle is that every member of the population or group of people under investigation must have an equal chance of being included in the sample. The second technique for drawing representative samples is the “quota method.” Using this method, interviewers select respondents who display certain predetermined attributes, such as sex, age group, occupational group, size of place of residence, etc. Taken as a whole, the attributes stipulated in the interviewers’ quota instructions represent a scaled-down model of the total population (Taylor 1995).
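The lottery principle described above can be illustrated with a brief simulation (a hypothetical sketch, not drawn from the survey literature itself): even for a population of many millions, a simple random sample of about 1,000 respondents estimates a proportion to within roughly ±3 percentage points, because the sampling error depends on the sample size, not on the size of the population.

```python
import math
import random

random.seed(42)

# Assumed, made-up population value: 52% of the population holds opinion A.
TRUE_SHARE = 0.52

def draw_sample(n):
    """Simulate a simple random sample: every member of the population
    has the same chance of selection (the lottery principle)."""
    hits = sum(1 for _ in range(n) if random.random() < TRUE_SHARE)
    return hits / n

n = 1_000
estimate = draw_sample(n)

# Standard error of a proportion: sqrt(p * (1 - p) / n).
# The 95% margin of error is about 1.96 * SE, i.e., roughly
# +/- 3 percentage points for n = 1,000 -- regardless of whether the
# population numbers one hundred thousand or one hundred million.
se = math.sqrt(TRUE_SHARE * (1 - TRUE_SHARE) / n)
margin = 1.96 * se

print(f"sample estimate: {estimate:.3f}")
print(f"95% margin of error: +/- {margin:.3f}")
```

This is why interviews with one or two thousand people suffice to characterize the opinions of millions, as discussed above: the precision of the estimate is governed almost entirely by the sample size.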
Over the course of the last few decades, research on survey methodology has resulted in a highly diverse body of literature. Numerous basic research studies have been conducted on various aspects of the survey procedure, although the bulk of such research generally focuses on sampling and data analysis. In recent years, research on the first of these two thematic areas, sampling techniques, has mainly concentrated on the fact that response rates – i.e., the share of persons selected for a sample who can actually be contacted and who are willing to complete the interview – are steadily declining in many countries around the world. Research on analytical techniques has been boosted by the greatly increased capacity of modern computers, which allows even extremely complex multivariate analysis methods to be employed with relative ease. In contrast, research on how interviewers affect respondent behavior has become somewhat less important in recent years, as telephone and online surveys have largely replaced face-to-face surveys.
So far, remarkably little basic research has been completed on the subject of questionnaire methods. At first glance, this seems surprising, since the questionnaire is the survey method’s most important tool. Without a good questionnaire, even the most complex analytical methods are of no use at all. Unlike research on sampling and analytical methods, there is no solid mathematical foundation for research on questionnaire techniques. Although survey research pioneers such as Hadley Cantril (1944) and Stanley Payne (1951) investigated the effects of various question wordings via a series of split-ballot experiments in the 1940s and 1950s, it was not until the 1980s that this area began to attract more attention again. In this respect, findings and methods from the field of cognitive psychology played an important role, as reflected in the work of researchers such as George Bishop, Norman Bradburn, Seymour Sudman, and Norbert Schwarz (cf. Sudman et al. 1996). In contrast, only a few isolated studies have dealt with the issue of how various wordings affect respondents emotionally or how the orchestration of the questionnaire affects response behavior. Another aspect of questionnaire methodology that has been neglected thus far is the effect of illustrations, lists, cards, or other items presented to respondents during the interview. The lack of basic research on this aspect is largely attributable to the current predominance of the telephone survey method, which has narrowed the range of methods that can be employed in surveys. Now, however, the emergence of online surveys has begun to revive the relevance of some of these aspects (cf. Couper et al. 2001). Similarly, little progress has been made in adapting measurement techniques from the field of individual psychology to the requirements of survey research, despite some very promising approaches (Ring 1992).
Since the early days of survey research, numerous variations on the public opinion polling method have been developed, to suit the specific investigative task at hand. One example in this context is the controlled field experiment (“split-ballot experiment”), which can play a major role not only in investigating media or advertising effects but also in basic methodological research, since it enables researchers to combine the evidentiary logic of the experiment with the generalizable findings of representative surveys (Petersen 2002).
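The evidentiary logic of the split-ballot experiment can be sketched as follows (a hypothetical illustration with made-up effect sizes, not data from any actual study): the representative sample is randomly divided into two halves, each half receives a different question wording, and because assignment is random, any systematic difference in the answers can be attributed to the wording itself.

```python
import random

random.seed(1)

def simulate_respondent(version):
    """Simulate one respondent's answer. The assumed (made-up) effect:
    wording B raises agreement by 8 percentage points over wording A."""
    p_agree = 0.40 if version == "A" else 0.48
    return random.random() < p_agree

n = 2_000
results = {"A": [], "B": []}
for _ in range(n):
    # Random assignment to questionnaire versions is the experimental
    # control; it makes the two half-samples statistically equivalent.
    version = random.choice(["A", "B"])
    results[version].append(simulate_respondent(version))

for version, answers in results.items():
    share = sum(answers) / len(answers)
    print(f"version {version}: {share:.1%} agreement (n = {len(answers)})")
```

Because both half-samples are themselves representative, the measured wording effect can be generalized to the total population, which is the combination of experimental and survey logic described above.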
Another example is the panel technique, whereby the same group of respondents is interviewed on several separate occasions. This method is particularly important when analyzing effects that cannot be investigated experimentally, thus playing a significant role in both social research and market research (Hansen 1982). In recent years, there has also been a surge in various technically supported measurement techniques and in “access panels,” whereby respondents are selected from a previously recruited pool of people who are willing to participate, although such techniques play only a minor role in academic research.
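The analytical advantage of the panel technique can be shown with a small hypothetical example (invented respondents, not real data): two independent cross-sectional surveys may show identical aggregate results, while a panel, by re-interviewing the same people, reveals the individual-level turnover hidden beneath the stable net figures.

```python
# Two interview waves with the same six (fictitious) panel respondents,
# each holding opinion "A" or "B".
wave1 = {"r1": "A", "r2": "A", "r3": "A", "r4": "B", "r5": "B", "r6": "B"}
wave2 = {"r1": "A", "r2": "B", "r3": "B", "r4": "A", "r5": "A", "r6": "B"}

# Net (aggregate) share of opinion "A" in each wave -- what a pair of
# ordinary cross-sectional surveys would report.
net_share_w1 = sum(v == "A" for v in wave1.values()) / len(wave1)
net_share_w2 = sum(v == "A" for v in wave2.values()) / len(wave2)

# Gross change: only the panel design, which tracks individuals across
# waves, can identify who actually switched opinions.
switchers = [r for r in wave1 if wave1[r] != wave2[r]]

print(f"net share of A: wave 1 = {net_share_w1}, wave 2 = {net_share_w2}")
print(f"respondents who switched: {switchers}")
```

Here both waves report an identical 50 percent share for opinion A, yet four of the six respondents changed their minds between waves, which is precisely the kind of change process that cannot be investigated without re-interviewing the same people.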
Social Significance Of Opinion Polls
Survey research has since become an integral part of many areas of life. It is probably the most important research tool in the empirical social sciences, for example in communication research, political science, and sociology. It plays a central role in the business world: today, representative surveys are routinely conducted in conjunction with product launches, advertising campaigns, and new design concepts of all kinds. Survey research plays a particularly vital role in the political process and the mass media, especially during election campaigns.
Right from the start, election forecasts have been particularly important for survey institutes since they allow researchers to compare the data on party strength ascertained before an election with the actual election outcome, thus providing a rare opportunity to test the reliability of the survey method against external criteria. For decades, election surveys have also been the target of critical remarks from politicians and journalists, due to the alleged influence of published election forecasts on voting behavior. Contrary to common assumptions, however, research to date indicates that the effect of such forecasts on voting behavior is actually only slight. At any rate, the influence exerted by polls on voting behavior is certainly far less significant than the effect of other forms of media reporting. Yet even if election forecasts did exert a major influence on voting behavior, it is fair to ask whether this would necessarily have a negative influence on the democratic process, as many people assume. Representative surveys are not the only source of information on the relative strength of the political parties prior to an election, but they are certainly the most reliable source. Particularly for politically astute, tactically minded voters, this information can be important, contributing to well-informed voting decisions. Without the publication of election polls, the void would simply be filled by less well-founded speculation.
- Bowley, A. (1915). Livelihood and poverty. London: Bell.
- Cantril, H. (1944). Gauging public opinion. Princeton, NJ: Princeton University Press.
- Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web survey design and administration. Public Opinion Quarterly, 65, 230–253.
- Dodd, S. (1946). Toward world surveying. Public Opinion Quarterly, 10, 470–483.
- Hansen, J. (1982). Das Panel: Zur Analyse von Verhaltens- und Einstellungswandel [The use of panel surveys to analyze change in behavior and attitude]. Opladen: Westdeutscher.
- Noelle-Neumann, E., & Petersen, T. (2005). Alle, nicht jeder: Einführung in die Methoden der Demoskopie [All but not each: Introduction to the methods of survey research], 4th edn. Berlin: Springer.
- Oberschall, A. (1965). Empirical social research in Germany 1848–1914. Paris: Mouton.
- Payne, S. (1951). The art of asking questions. Princeton, NJ: Princeton University Press.
- Petersen, T. (2002). Das Feldexperiment in der Umfrageforschung [The field experiment in survey research]. Frankfurt am Main: Campus.
- Petersen, T., Sabel, P., Grube, N., & Voß, P. (2004). Der Fragebogen Karls des Großen. Ein Dokument aus der Vorgeschichte der Umfrageforschung [Charlemagne’s questionnaire: A document from the very beginnings of survey research]. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 56, 736–745.
- Ring, E. (1992). Signale der Gesellschaft: Psychologische Diagnostik in der Umfrageforschung [Societal signals: Psychological diagnostics in survey research]. Göttingen: Verlag für angewandte Psychologie.
- Sudman, S., Bradburn, N., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
- Taylor, H. (1995). Horses for courses: How different countries measure public opinion in very different ways. The Public Perspective (February/March), 3–7.