Twenty-first-century mass communication scholars rarely question the existence of media effects. Research has presented significant and consistent evidence that the mass media have noticeable and meaningful effects. Evidence comes not only from the accumulated body of individual studies, but also from meta-analyses that organize related research studies and combine their findings to assess the direction and strength of media effects. In general, research finds that media effects are modest, small to moderate in size. Conclusions about the strength of media effects, however, must be tempered by considerations of research methodology (laboratory compared to more natural settings), the effects of different types of media content (pro-social compared to antisocial), and the effects of routine compared to unusual content.
The Strength Of Media Effects
Meta-analysis is the primary method for determining the strength of media effects. Meta-analysis is a research technique that locates published and unpublished studies about different media effects. Then, the results from those studies are combined and averaged to ascertain the overall size of effects across a range of studies conducted by many different researchers at different times, in different places, and with different samples. The strength of this approach is its cumulative character; combining a range of studies allows not only for conclusions about strength of impact, but also for researchers to begin to generalize about the types of people affected and to identify the different variables that might enhance or mitigate media effects.
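To make the combining step concrete, here is a minimal sketch of fixed-effect pooling of correlations, one common way such averaging is done. The study values are hypothetical, invented purely for illustration; they are not drawn from any meta-analysis cited here.

```python
import math

# Hypothetical per-study results (correlation r, sample size n). These
# numbers are invented for illustration, not drawn from any study above.
studies = [(0.25, 120), (0.31, 85), (0.18, 200), (0.40, 60)]

def fisher_z(r):
    """Fisher's z transform, which makes correlations roughly normal."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a Fisher z value into a correlation."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

# Fixed-effect pooling: weight each study's z by the inverse of its
# sampling variance, which for Fisher's z is 1 / (n - 3).
pooled_z = (sum(fisher_z(r) * (n - 3) for r, n in studies)
            / sum(n - 3 for _, n in studies))

print(f"Pooled effect size across studies: r = {inverse_fisher_z(pooled_z):.2f}")
```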
Meta-analyses reveal that media effects can best be described as small to moderate. Two statistical measures are typically used to describe the strength of media effects. Pearson’s correlation (r) ranges from −1.00 to 1.00, where values closest to either end point are the most substantial; values close to 0.0 mean that there is no connection between media exposure and the effect. A second measure, d, expresses the difference between the experimental group and the control group in standard deviation units. Larger values of d indicate larger effects.
In some cases, media’s impact is fairly strong. Recent meta-analyses (see Preiss et al. 2007), for example, show that the agenda-setting effect is among the largest in the field. Overall, across 90 studies, the relationship between media and audience agendas is r = 0.53.
Various meta-analyses have identified moderate effects of media violence. In 1986, exposure to television violence accounted for about 0.3 of a standard deviation (d = 0.30) in negative effects. Updated meta-analyses find that the impact has grown somewhat larger. In 1994, scholars found that the effect size of television violence was d = 0.65 (r = 0.31). Replications of meta-analyses of the effects of media violence on observed aggressive behavior also reveal a small increase in effect sizes: a 1991 meta-analysis located an effect size of d = 0.27, while the updated 2007 meta-analysis found an effect size of d = 0.35.
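The paired figures just cited (d = 0.65, r = 0.31) also illustrate how the two effect-size measures relate to one another. A minimal sketch of the standard conversion formulas, assuming equal group sizes:

```python
import math

def d_to_r(d):
    """Convert Cohen's d to a correlation r (assumes equal group sizes)."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r):
    """Convert a correlation r to Cohen's d (assumes equal group sizes)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# The 1994 television-violence figures reported above:
print(f"d = 0.65 corresponds to r = {d_to_r(0.65):.2f}")  # r = 0.31
print(f"r = 0.31 corresponds to d = {r_to_d(0.31):.2f}")  # d = 0.65
```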
Other media effects are a bit smaller. The negative effects of pornography range from r = 0.11 to r = 0.22. The connection between playing video games and aggression is r = 0.15. The effects of stereotyped media content on sex-role stereotyping range from r = 0.11 to r = 0.31 across surveys and experiments, as well as across studies conducted in the US and in other countries.
Media content has pro-social effects. Pro-social messages targeted toward children have a moderate effect: r = 0.23. Media campaigns designed to encourage people to adopt healthy behaviors and practices have stronger impacts (r = 0.12) than those encouraging cessation of unhealthy behaviors (r = 0.05).
Interpreting The Evidence
The statistical evidence for media effects is modest, considering the amount of time spent with various media. Some context helps in interpreting the size of media effects. Meta-analyses in other fields have found that the effect of gender on height is d = 1.20; the effect of one year of elementary school on reading ability is d = 1.00; of tutoring on math skills, d = 0.60; and of drug therapy on psychotic patients, d = 0.40 (Hearold 1986). For r, squaring the value shows how much of the variance between two variables is accounted for. So, meta-analyses show that exposure to pro-social messages accounts for 5.3 percent of the variance in pro-social actions in children.
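As a worked illustration of the squaring rule, the 5.3 percent figure follows directly from the pro-social effect of r = 0.23 reported above:

```latex
r^2 = (0.23)^2 = 0.0529 \approx 5.3\% \ \text{of the variance explained}
```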
There is evidence that the strength of media effects varies. Effects of media violence are larger in laboratories (d = 0.80) than in the real world (d = 0.35 in natural experiments and d = 0.38 in surveys). The control and precision of the laboratory experiment magnify the effects of exposure to media content. There is also evidence that effects of pro-social media content are larger than those of antisocial media content. Moreover, effects can be stronger when encouraging adoption of behaviors (such as seat belt use or fruit and vegetable consumption) than when promoting behavior cessation (e.g., smoking, alcohol use). Clearly, media have a larger impact on socially encouraged attitudes and behaviors and on those that are easier to enact. There is also evidence to suggest that unusual media messages are likely to have a greater impact than routine ones. Research found that Magic Johnson’s 1991 announcement that he was HIV-positive had much larger effects on knowledge about HIV and AIDS, attitudes toward HIV-positive people, and desire for more information about HIV than more routine messages. Salient, or atypical, messages are likely to have greater impact.
The effects of mass communication might be small to moderate, but they are certainly meaningful because of the size of the audience and the importance of the outcomes. While the effects of media health campaigns, for example, are smaller than the effects of clinical interventions (r = 0.09 compared to r = 0.27), media campaigns are more cost-effective and reach far more people. The small effects found for media health campaigns cannot be dismissed, because even small effect sizes mean that large numbers of people have been influenced. Those who conduct research on television violence estimate that eliminating television violence could reduce aggression in society by small but significant amounts. Small effects of mass communication translate into large groups of people being affected.
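One way to make this arithmetic concrete is Rosenthal and Rubin’s binomial effect size display, which reads a correlation as a shift in “success” rates around 50 percent. The sketch below applies it to the health-campaign figure of r = 0.09 cited above; the audience size is a hypothetical round number chosen only for illustration.

```python
# Binomial effect size display (BESD): read r as the difference between
# the "success" rate with exposure (0.50 + r/2) and without (0.50 - r/2).
r = 0.09                   # media health campaigns, as cited above
audience = 10_000_000      # hypothetical campaign reach

exposed_rate = 0.50 + r / 2    # 54.5% adopt the healthy behavior
unexposed_rate = 0.50 - r / 2  # 45.5% adopt without the campaign
extra_adopters = (exposed_rate - unexposed_rate) * audience

print(f"Additional people influenced: {extra_adopters:,.0f}")  # 900,000
```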
Problems In Interpreting Evidence Of Media Effects
Despite the presumption of media effects and the evidence drawn from meta-analyses, there are still some areas of disagreement regarding media effects. The most substantial media effects are found in laboratory experiments. There is a good deal of value in conducting laboratory experiments because researchers can control the type and amount of media exposure and assess time order, or causation. The control of laboratory settings, though, is also a weakness. Exposure to media content in a laboratory setting is unnatural and cannot account for selective exposure. Experimental participants might be shown television content (e.g., violence or sexual content) that they would never seek out on their own. Much media content is consumed in the presence of friends and family, whose influence cannot be reproduced in laboratory settings. Moreover, the dependent measures used in laboratories are often quite artificial: hitting Bobo dolls or pushing buttons to “shock” people does not clearly translate to real-life actions. Hovland (1959) also points out that experiments typically focus on short-term effects, measured fairly soon after exposure to media content. Experimental designs generally cannot assess whether effects endure after the experimental session.
Laboratory experiments can also introduce experimenter effects, or effects that are due to the actions of the experimenter rather than to the experimental stimulus. When an experimenter presents content to research participants, participants might assume that the experimenter approves of the content, even if it is violent, sexual, or stereotypical. As Hovland (1959) noted about persuasion research, messages presented in a laboratory are likely to have stronger effects because of the credibility of the experimenter. Participants might likewise assume that the various actions available to them in a laboratory, even if they are inappropriate or undesirable, are sanctioned by the experimenter. Finally, Hovland explains that researchers often select media content designed to magnify differences between experimental and control conditions; these extreme selections are often atypical of media content seen in the real world. Rosenthal (1979) estimated that various experimental effects range from d = 0.23 to d = 1.78. So, experimental effects could account for much of the media impact observed in laboratory settings.
Are Media Effects Stronger Than Evidence Suggests?
Despite these concerns about laboratory research, most scholars agree that media effects are substantial and meaningful. In fact, there are several reasons to believe that research underestimates media effects, because of methodological imprecision and restrictive theoretical assumptions.
Outside the laboratory, measures of media exposure are imprecise and subject to a good deal of measurement error (Webster & Wakshlag 1985). Media use is typically private and often inattentive. Assessing media exposure by asking people to estimate how much time they spend with it is fraught with inaccuracy. Even observations of media use cannot assess level of attentiveness. Media effects might appear stronger if researchers had access to accurate measures of attentive media use.
For ethical reasons, researchers often limit dependent variables to those that cannot harm research participants. So, studies rarely give participants opportunities to enact behaviors that might reflect media impact. Instead, researchers assess attitudes, perceptions, and reactions to hypothetical situations. These “diluted” measures might not be the most valid and accurate ways to assess the impact of the mass media.
Most theories of media effects assume a linear relationship between media exposure and impact; that is, as exposure to media content increases, so will the likelihood of its effects. Nonlinear processes are often left unexplored. Some media effects processes might be curvilinear; that is, effects might increase only to a certain point. Or there might be a threshold process, so that media content has no impact until a threshold level of exposure is reached. Greenberg (1988) proposed the drench hypothesis of media effects: instead of media content having a “drip, drip” cumulative effect, he suggests that some media images are so powerful that they command attention and have strong effects.
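As a toy illustration of these competing assumptions, the sketch below contrasts linear, threshold, and ceiling-type (curvilinear) exposure-response shapes. The functional forms and parameters are hypothetical, chosen only to show how the three models diverge; they are not fitted to any data.

```python
import math

def linear_effect(exposure, slope=0.05):
    """Usual assumption: effect grows steadily with exposure."""
    return slope * exposure

def threshold_effect(exposure, threshold=5.0, slope=0.05):
    """No impact until exposure passes a threshold level."""
    return slope * (exposure - threshold) if exposure > threshold else 0.0

def curvilinear_effect(exposure, ceiling=1.0, rate=0.2):
    """Effect increases only to a certain point, then levels off."""
    return ceiling * (1.0 - math.exp(-rate * exposure))

# Compare the three shapes across weekly hours of exposure.
for hours in (0, 2, 5, 10, 20, 40):
    print(f"{hours:>2}h  linear={linear_effect(hours):.2f}  "
          f"threshold={threshold_effect(hours):.2f}  "
          f"curvilinear={curvilinear_effect(hours):.2f}")
```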
The main reason, however, that media effects appear limited is that it is impossible to isolate media’s impact in most developed societies. It is nearly impossible to find someone who has not been exposed to the mass media. And even those people who do not watch much television, read newspapers, or surf the World Wide Web interact regularly with others who do. Media’s influence thus extends beyond direct exposure; it is filtered through social contact with others.
References:
- Greenberg, B. S. (1988). Some uncommon television images and the drench hypothesis. In S. Oskamp (ed.), Applied social psychology annual, vol. 8: Television as a social issue. Newbury Park, CA: Sage, pp. 88–102.
- Hearold, S. (1986). A synthesis of 1043 effects of television on social behavior. In G. Comstock (ed.), Public communication and behavior, vol. 1. Orlando, FL: Academic Press, pp. 65–133.
- Hovland, C. I. (1959). Reconciling conflicting results derived from experimental and survey studies of attitude change. American Psychologist, 14, 8–17.
- Perse, E. M. (2001). Media effects and society. Mahwah, NJ: Lawrence Erlbaum.
- Preiss, R. W., Gayle, B. M., Burrell, N., Allen, M., & Bryant, J. (2007). Mass media effects research: Advances through meta-analysis. Mahwah, NJ: Lawrence Erlbaum.
- Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86, 638–641.
- Webster, J. G., & Wakshlag, J. (1985). Measuring exposure to television. In D. Zillmann & J. Bryant (eds.), Selective exposure to communication. Hillsdale, NJ: Lawrence Erlbaum, pp. 35–62.