Humans act, at least partly, on the basis of how they think others expect them to act. This means that humans have the capacity to know what others think or expect them to do. Some researchers have argued that understanding what others think is essential to social life and that successful human relationships depend on our ability to read the minds of others (Gavita 2005). How good are we, though, at knowing what others think or expect us to do? Data show that there is often a negative correlation between people’s confidence in their ability to know what others are thinking and their actual accuracy in doing so (Davis & Kraus 1997). In other words, those who are more confident about their ability to know what others are thinking are, in fact, less accurate, compared to those who are less confident. Accuracy, however, may not be important in this context because what we choose to do usually depends on our perceptions more strongly than on objective reality.
Most communication-based campaigns have, at their core, the central mission of changing people’s perceptions of reality, whether that reality pertains to something external (such as a political issue, an organization, etc.) or internal (self-concept). For example, political campaigns seek to change people’s perceptions of a particular candidate or issue, commercial campaigns strive to alter people’s attitudes toward a product, and health campaigns seek to alter people’s perceptions of their self-image, abilities, or self-worth, to name just a few. In many instances, the ultimate goal of the campaign may be to change individuals’ behaviors, but changing people’s perceptions with regard to their attitudes or beliefs is thought to be an important pathway for doing so.
Systematic Misconstruing Of Perceptions Of Reality
Research shows not only that people are often inaccurate in their assessment of others, but also that people’s misperceptions follow systematic patterns (Fields & Schuman 1976). People make predictable errors when asked to assess where others stand on a particular issue, how much at risk others are to various threats, and how much others are influenced by the media. Many of these patterns have been empirically studied by scholars, and those with particular relevance to media campaigns are summarized below.
Although it has been defined in many ways, pluralistic ignorance was first articulated by Allport (1924) as the tendency of people to overestimate the public support for norms deemed to be socially desirable. In classic studies of this phenomenon in the United States in the 1970s, researchers found, for example, that, although people themselves believed in racial justice, they also perceived that their peers harbored more racist attitudes than they actually did (O’Gorman 1979). Similarly, people have been found to report greater concern for environmental causes in comparison to their estimates of how much their peers are concerned with the same issues. Pluralistic ignorance has also been found in the context of college students’ perceptions about the prevalence of alcohol consumption among their peers. Most students tend to hold exaggerated beliefs about how much alcohol their peers consume (Perkins & Berkowitz 1986), and they believe that regulations against alcohol will be resisted by others more strongly than by themselves.
The cognitive mechanism underlying the effects of pluralistic ignorance is not fully known, but researchers have asked how people come to internalize incorrect information about public perceptions. There is some evidence to indicate that the frequency with which people hear about a particular opinion – even if it comes from a single source – tends to increase perceptions about how widely the opinion is held by the larger public (Weaver et al. 2007). This seems to indicate that pluralistic ignorance may be linked with the accessibility of opinions – those that are cognitively more accessible are thought to be held by many others. If so, it would further seem that, to the extent that an issue garners a great deal of media attention, pluralistic ignorance around that issue would be high.
The false consensus effect and pluralistic ignorance are similar in that both pertain to patterns of misperception; they differ in that the false consensus effect is the tendency to overestimate the extent to which others share or support one’s own attitudes, beliefs, or behaviors (Ross et al. 1977). Furthermore, the greater the relative deviance of the behavior one engages in, the greater the magnitude of the false consensus effect. In other words, paradoxically, individuals’ tendency to misperceive the presence of consensus among their peers is stronger if the behavior in question is practiced less often (that is, less normal). The false consensus effect is also thought to be a special case of social projection, whereby people validate their beliefs by projecting their characteristics onto others (Holmes 1968). Hence, the greater the need to validate one’s beliefs, the greater the pressure to perceive that others holding similar beliefs exist in large numbers. When beliefs are actually held by a great many people, there is less need to validate them; the social pressure to justify holding on to one’s beliefs is greater when those beliefs are unpopular.
Because the false consensus effect is thought to reflect a motivational drive to restore an internal balance, its impact should be greater among those who are more concerned with projecting a normatively appealing image of themselves (i.e., high self-monitors). When individuals are less concerned about how they appear to others, they should experience a weaker drive to exhibit the false consensus effect. This predicted moderating effect of self-monitoring has, indeed, been empirically supported (Bauman & Geher 2002).
People’s tendency to believe that the media have greater influence on others (i.e., third persons) than on themselves has been termed the third-person effect (Davison 1983). Accordingly, this concept has been implicated in people’s proclivity to institute strict censorship of media content (especially content, such as pornography, believed to have deleterious effects on children) because of their belief that others will not possess the requisite mental faculties to remain unharmed. It is generally believed that third-person effects arise because of an exaggerated perception of effects on others as well as an underestimation of effects on oneself. Furthermore, certain conditions exacerbate these effects, including the perception that the source of the message is biased or that the content is harmful or negative.
The “influence of presumed influence” (Gunther & Storey 2003) is an extension of the third-person effect in that it posits that people modify their behaviors in response to their beliefs that the media have had an impact on others. In other words, because of third-person effects, people come to believe that others have been influenced by the media, which results in people modifying their interactions with those they believe have been so influenced. Gunther and Storey (2003) first observed this phenomenon as a result of a radio-based intervention in Nepal that targeted health-care workers. Because lay persons were also exposed to the program, they came to believe that their health-care providers would have been influenced by it, and hence they expected greater service and more professionalism when they interacted with those providers. This form of influence is particularly relevant when the recipients of a media intervention include individuals who were not directly targeted by the campaign.
Optimistic bias is the tendency to view oneself as being less vulnerable than others to various diseases and risk factors. We tend to believe that others, in comparison to ourselves, are more susceptible to risks and negative life events. Since Weinstein (1980) first articulated this term, researchers have found the existence of optimistic bias across a variety of health domains, and this finding seems to be robust across cultures. The magnitude of its effects decreases, however, when the target (the average other whose risk is being assessed relative to one’s own) is portrayed as being similar or when the risk factor in question is perceived to be uncontrollable.
Implications For Campaigns
The systematic way in which people misperceive reality has a number of implications for campaigns seeking to change individuals’ opinions and behaviors. Misperceptions can act as impediments to social change. Theories of normative influence predict that when people (falsely) believe they are in the minority (as in the case of pluralistic ignorance, for example), they are likely to underestimate the social support they would receive in seeking social change, to view their own stance as abnormal, and to be less willing to take on challenging tasks for fear of sanctions. Similarly, as the false consensus effect would predict, individuals engaging in certain deviant behaviors may erroneously believe their behaviors are normal (thus obviating the need to change) when, in fact, they are not. Thus, campaigns seeking to bring about social change need to focus on understanding the extent to which their audience members harbor misperceptions about their social reality. This is also the approach adopted by interventions seeking to reduce college students’ alcohol consumption. These campaigns, based on what is known as the “social norms approach,” seek to educate students about the actual level of alcohol consumption on campus, under the assumption that students’ consumption will decrease once they realize that drinking moderately is the norm, not the exception.
The patterns of misperception described here also speak to the frame of reference that people use in making judgments about the need to make changes in their lives. If the frame of reference is well calibrated, the need to change may be well informed. If, however, the frame of reference is biased to support inaction (as when people engaging in high-risk behaviors perceive that their activities are within the norms of acceptable social practice), then there may be little motivation for change. One explanation for the existence of optimistic bias, for example, is that people use others (who are at greater risk of a disease) to make assessments about their own relative risk (Rimal & Morrison 2006). Similarly, the third-person effect may exist because individuals judge their invulnerability to media influences in relation to others who, in their minds, are more susceptible. Hence, it seems that media campaigns need to understand which frames of reference people use to make assessments about the appropriateness of their own status. The biases that have been documented in the literature could be attributed to the inaccuracy of individuals’ judgments. They could also be attributed to the inappropriate frames of reference that people invoke in making such judgments, and, if so, it seems that interventions need to focus not so much on providing accurate information as on correcting the frames of reference that people invoke.
There is now an extensive body of work documenting the various misperceptions that are brought to bear when individuals are asked to gauge the opinions, beliefs, and behaviors of others. Much work has also been done in articulating the conditions that either exacerbate or ameliorate the patterns of these misperceptions. What is now needed is work that specifically seeks to determine how campaigns can reduce these misperceptions so that individuals are able to make decisions based on accurate information about others’ attitudes and behaviors. Furthermore, this work needs to be done through controlled studies that take temporal order into account, as much of the extant literature is based on cross-sectional data that make it difficult to delineate cause from effect.
- Allport, F. H. (1924). Social psychology. Boston, MA: Houghton Mifflin.
- Bauman, K. P., & Geher, G. (2002). We think you agree: The detrimental impact of the false consensus effect on behavior. Current Psychology, 21, 293–318.
- Davis, M. H., & Kraus, L. A. (1997). Personality and empathic accuracy. In W. Ickes (ed.), Empathic accuracy. New York: Guilford, pp. 144–168.
- Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47, 1–15.
- Fields, J. M., & Schuman, H. (1976). Public beliefs about the beliefs of the public. Public Opinion Quarterly, 40, 427–448.
- Gavita, O. (2005). Can we read others’ minds? Rational beliefs, positive illusions and mental health. Journal of Cognitive and Behavioral Psychotherapies, 5, 159–179.
- Gunther, A. C., & Storey, J. D. (2003). The influence of presumed influence. Journal of Communication, 53, 199–215.
- Holmes, D. S. (1968). Dimensions of projection. Psychological Bulletin, 69, 248–268.
- O’Gorman, H. J. (1979). White and black perceptions of racial values. Public Opinion Quarterly, 43, 48–59.
- Perkins, H. W., & Berkowitz, A. D. (1986). Perceiving the community norms of alcohol use among students: Some research implications for campus alcohol education programming. International Journal of the Addictions, 21, 961–976.
- Rimal, R. N., & Morrison, D. (2006). A uniqueness to personal threat (UPT) hypothesis: How similarity affects perceptions of susceptibility and severity in risk assessment. Health Communication, 20, 209–219.
- Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279–301.
- Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus. Journal of Personality and Social Psychology, 92, 821–833.
- Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39, 806–820.