One definition of risk communication is “communication with individuals (not necessarily face-to-face) that addresses knowledge, perceptions, attitudes, and behavior related to risk” (Edwards & Bastian 2001, 147). In the public health arena, we often hear about the dangers of poor lifestyle habits (e.g., smoking, drinking, not exercising, failure to vaccinate or screen for cancer) or how to engage in preventive health behaviors (e.g., taking aspirin to prevent heart disease). Often, the media, health providers, and even our family and friends craft persuasive communications designed to inform us about our risk of disease or other bad outcomes. The basic idea behind these messages, consistent with many theoretical models of behavior change, is that increasing a person’s sense that something bad can happen to them, that is, their perceived risk, will motivate behavior change to prevent or diminish the threat. Thus, perceptions of risk entail not only the perceived probability of an event occurring – or not – but also its negative consequences, which can encompass the physical, social, psychological, and economic realms.
“Risk” is a difficult concept to convey and one that is poorly understood by the public. A comprehensive understanding of risk that can guide informed decisions requires that patients know the antecedents (e.g., risk factors), likelihoods (probabilities), and consequences of a hazard, as well as the preventive actions (pros and cons) necessary to control or avert the risk, if possible (Weinstein 1999). Most attention in risk communication has focused on conveying probabilistic information, perhaps owing more to the inherent complexities of describing uncertainty than to those of the other dimensions mentioned. A critical issue is whether those at the forefront of communicating risk conceptualize and craft messages targeting these dimensions. Admittedly, not all communications will involve discussion of each component above (e.g., probabilities of disease). For example, among those who test positive for a disease (e.g., Huntington’s), the focus may be on conveying details of the disease and how to cope with its consequences. In these situations, elements related to coping and illness perceptions (e.g., timeline, severity, controllability) can take precedence (Leventhal et al. 2001).
Communicating Numerical Risk
Numbers (e.g., percentages, frequencies) are often used to describe the magnitude of risk. Numbers have several appealing qualities: (1) they are precise, (2) they convey an aura of “scientific credibility,” (3) they can be converted from one metric to another (e.g., 10 percent = 1 out of 10), (4) they can be verified for accuracy – assuming enough observations, and (5) they can be computed using algorithms (e.g., Gail Score for breast cancer; Windschitl & Wells 1996). Further, it is assumed that many people appreciate numbers due to their mathematical training in school and/or occupation. For these reasons, and with advances in evidence-based medicine, numbers are likely to be used even more in clinical and other settings to convey risk.
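As a minimal illustration of point (3) above, the short Python sketch below (the function name, its default reference population, and the example figure are illustrative, not from the source) expresses the same probability as a percentage, an “x out of N” frequency, and a “1 in N” statement:

```python
# Illustrative sketch: one risk estimate expressed in several common numeric formats.
def describe_risk(probability: float, reference_population: int = 100) -> dict:
    """Return a probability as a percentage, an 'x out of N' frequency, and '1 in N'."""
    if not 0 < probability <= 1:
        raise ValueError("probability must lie in (0, 1]")
    return {
        "percentage": f"{probability * 100:g}%",
        "frequency": f"{probability * reference_population:g} out of {reference_population}",
        "one_in_n": f"1 in {1 / probability:g}",
    }

print(describe_risk(0.10))
# {'percentage': '10%', 'frequency': '10 out of 100', 'one_in_n': '1 in 10'}
```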
A common marker of understanding is whether the public or patients are able to correctly reflect back a probability estimate. Those who provide approximately the same number are often viewed as understanding their magnitude of risk. Although accuracy generally improves after numerical risk feedback, many individuals continue to provide a risk estimate that deviates from the one they received (e.g., in genetic risk; see Braithwaite et al. 2004). Do such deviations suggest individuals do not understand their numerical risk? Below are various reasons, not meant to be exhaustive, why individuals’ numerical risk estimates may deviate from those they were given.
Format of information: individuals may have received their estimate in a format that was poorly understood. There appears to be wide variation in how risk is presented. For example, in conveying BRCA1/2 risk (genetic risk for breast cancer), Butow and Lobb (2004) found high variability in the formats counselors used (e.g., different combinations of numerical, verbal, and graphical displays), along with 23 different facts about risk. Wide variability in formats and a tendency to convey too much information may cause confusion rather than clarity. Communicators need to be aware of formats that best facilitate the processing of risk magnitudes or other relevant risk information (e.g., natural frequencies, graphs; Gigerenzer & Hoffrage 1995; Ancker et al. 2006); a brief worked example of the natural frequency format follows this list. Importantly, no single format may be best in all situations.
Low message engagement: as a result of the above or other processes (e.g., relying on an expert to interpret information; poor math skills), individuals may not have fully engaged with the information provided.
Biased motivated processing: individuals may not have believed or may have distorted the magnitude of the risk in order to reduce the perceived threat or to interpret it in a more favorable manner (e.g., optimistic bias; Klein & Weinstein 1997).
Incorporated behavior change: individuals may have taken into account any future actions in their estimate, thus diminishing or increasing the perceived risk. The estimate will then reflect a future state rather than the current one.
Mood states: both positive and negative moods can affect the processing of (risk) information (Loewenstein et al. 2001; Slovic et al. 2005). Recent models merging perceived risk and affect, such as “risk as feelings” and the “affect heuristic,” suggest that individuals may use their current affective state as a source of information from which to derive risk perceptions.
Use of heuristics: individuals use mental shortcuts (i.e., heuristics) to make judgments. For example, they may feel that they resemble the type of person who is at higher/lower risk for disease (i.e., representativeness heuristic), recall instances of people with or without the condition, making the event seem more/less likely (i.e., availability heuristic), or be influenced by other numerical estimates they have encountered (i.e., anchoring; Tversky & Kahneman 1974). These heuristics can distort perceived risk estimates.
Insensitivity to risk assessment measures: depending on how risk judgments are assessed (e.g., numerical vs verbal scales), responses may be more or less valid and reliable. Indeed, there is no “gold standard” for evaluating risk perceptions (Diefenbach et al. 1993), and whether responses can be used to judge accuracy of risk judgments involves meeting certain requirements (Windschitl & Wells 1996).
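Returning to the “format of information” point above, the following sketch uses hypothetical screening numbers (not taken from the source) to show why natural frequencies (Gigerenzer & Hoffrage 1995) are often easier to grasp than the equivalent conditional probabilities:

```python
# Hypothetical screening example: the same result as conditional probabilities
# (Bayes' theorem) and as natural frequencies out of 1,000 people.
prevalence = 0.01       # assumed: 1% of people have the disease
sensitivity = 0.90      # assumed: P(positive test | disease)
false_positive = 0.09   # assumed: P(positive test | no disease)

# Probability format.
p_disease_given_positive = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_positive * (1 - prevalence)
)

# Natural frequency format: imagine 1,000 people.
population = 1000
with_disease = prevalence * population                            # 10 people
true_positives = sensitivity * with_disease                       # 9 people
false_positives = false_positive * (population - with_disease)    # about 89 people

print(f"P(disease | positive) = {p_disease_given_positive:.2f}")  # about 0.09
print(f"Of {true_positives + false_positives:.0f} people who test positive, "
      f"about {true_positives:.0f} actually have the disease.")
```

The two formats carry the same information, but the frequency version (“about 9 of 98 positives actually have the disease”) is typically easier to interpret than the conditional-probability version.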
As these reasons suggest, caution should be used in interpreting either a match or a mismatch as evidence of whether someone understands a numerical risk. A match may simply reflect a person repeating back information rather than true understanding. Insight into understanding may be gained from whether the person asks relevant questions and paraphrases the meaning of what was conveyed in a manner that follows logically from the information provided. The key issue is whether the interpretation of the information results in communication errors, as discussed below.
Evaluating the Efficacy of Risk Communications
How do we judge whether our risk communications are effective? Very few guidelines exist on this issue (Rohrmann 1992; Weinstein & Sandman 1993), and those that do vary by whether the communications are focused on education, persuasion, crisis management, or conflict management. Below, in addition to what others have suggested (Rohrmann 1992; Weinstein & Sandman 1993), are ways by which the efficacy of risk communication might be judged.
Engagement in recommended behavior(s): a risk communication is deemed effective if the resulting perceptions of risk lead to the recommended behavior. A risk communication would be judged ineffective, or even detrimental, if it caused the person to act inappropriately (e.g., forgoing mammograms because the personal risk estimate was low).
Paying attention to the message: a key step in any communication is whether the target audience paid attention to the message. Risk messages that are attended to, as reflected in outcomes such as recall, use, and dissemination to others, in addition to any actions taken, can in some situations be considered effective.
Acquisition of factual knowledge: did your communications result in greater understanding of the phenomenon in question, especially in relation to the dimensions of understanding risk discussed earlier (e.g., knowledge of risk factors and of what to do)?
Judging the direction of risk magnitude: while an individual may accurately reflect back a risk magnitude, they may judge that risk to be higher or lower than the source intended. Thus, although the take-home message might be that the person’s risk is actually higher/lower, the person does not walk away with this impression.
Conflict/trust: does presentation of risk information result in greater or less trust and/or conflict (e.g., outrage)?
Evocation of extreme negative/positive affect: do individuals, after receipt of risk information, express undue anxiety, stress, or anger? Conversely, do they express unexpectedly high levels of positive affect in light of highly probable negative outcome(s)? These issues involve psychological well-being as outcomes.
Judging perceived risks/benefits: assuming individuals are aware that actions can be taken to reduce their risk, they may not fully understand the benefits and costs of those actions (e.g., mastectomy to reduce breast cancer risk). They may fail to fully appreciate and/or know how to balance the risks and benefits (e.g., how much is my risk reduced in light of the possible side effects?); a brief illustrative sketch of this trade-off follows these criteria.
Consistency between values and decisions/actions: as a result of the discussion, is the person’s decision to take some form of action consistent with his/her values?
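As a concrete, and purely hypothetical, illustration of the risk/benefit balancing raised under “judging perceived risks/benefits,” the sketch below contrasts a preventive action’s absolute risk reduction with its side-effect rate (all figures are assumed for illustration):

```python
# Hypothetical sketch: weighing a preventive action's benefit against its side effects.
baseline_risk = 0.04       # assumed: 4% chance of the bad outcome without the action
risk_with_action = 0.02    # assumed: 2% chance with the action
side_effect_risk = 0.005   # assumed: 0.5% chance of a serious side effect from the action

absolute_risk_reduction = baseline_risk - risk_with_action   # 0.02
number_needed_to_treat = 1 / absolute_risk_reduction         # 50 treated per event avoided
number_needed_to_harm = 1 / side_effect_risk                 # 200 treated per serious harm

print(f"Treat {number_needed_to_treat:.0f} people to prevent one event; "
      f"expect one serious side effect for every {number_needed_to_harm:.0f} treated.")
```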
Considerations in Communicating Risk
Here are some considerations for those who wish to design risk communication interventions; the reader can also refer to an excellent text on this issue (Witte et al. 2001). In addition to clearly specifying the goal(s) of your communication, considering its accessibility and channel of dissemination (e.g., web, print, phone), understanding your target audience (e.g., needs, values, prior experiences with the health issue), the context (e.g., socio-political environment), and the resources available (e.g., staffing, costs), attention should be given to the following aspects.
Content areas of risk: will your communications cover risk factors (e.g., physiological, environmental, genetic), forms of probabilistic information and their utilities (absolute, relative, attributable; an illustrative calculation follows these considerations), important consequences (e.g., social, physiological, psychological, economic), and methods of prevention, if possible (e.g., lifestyle changes, surgical, medicinal)?
Common communication formats, their strengths and weaknesses: will you communicate information numerically, verbally, and/or graphically? Know the strengths and weaknesses of each approach.
Biases in perceptions of and personalization of risk: are there reasons to believe your audience may review and process the messages in some biased fashion (e.g., selective attention to information, optimistic or pessimistic reactions, superficial processing, counterarguing)? If so, examine why this might happen and ways to curb the potential effects of such biases.
Probing for understanding: how will you judge whether the target audience understands your message? In this, as in so many other situations, pilot test your materials.
Evaluating the efficacy of risk communications: how will you judge whether your communication was effective? For example, if greater factual knowledge is gained but with no accompanying recommended behavior change, is this considered a success, a failure, or neither?
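To make the first consideration’s “forms of probabilistic information” concrete, here is a brief sketch relating absolute, relative, and attributable risk; the incidence figures are hypothetical and not drawn from the source:

```python
# Hypothetical sketch: absolute, relative, and attributable risk for exposed vs. unexposed groups.
risk_exposed = 0.20      # assumed: 20 cases per 100 exposed people
risk_unexposed = 0.05    # assumed: 5 cases per 100 unexposed people

absolute_risk = risk_exposed                         # probability of the outcome in a group
relative_risk = risk_exposed / risk_unexposed        # 4.0: "four times the risk"
attributable_risk = risk_exposed - risk_unexposed    # 0.15: excess risk tied to the exposure

print(f"Absolute risk (exposed): {absolute_risk:.0%}")
print(f"Relative risk: {relative_risk:.1f}")
print(f"Attributable risk: {attributable_risk:.0%}")
```

The same data can thus sound alarming (“four times the risk”) or modest (“15 percentage points of excess risk”), which is one reason the choice among these formats matters to the communicator.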
In sum, the application of risk communication to diverse areas of health will only increase in the future. Effective risk communication involves the consideration of a multitude of factors, some of which have been highlighted here. Risk communication is anything but easy and poses formidable challenges. Yet when done effectively, the rewards for the public and patients can be immense.
References:
- Ancker, J. S., Senathirajah, Y., Kukafka, R., & Starren, J. B. (2006). Design features of graphs in health risk communication: A systematic review. Journal of the American Medical Informatics Association, 13(6), 608–618.
- Braithwaite, D., Emery, J., Walter, F., Prevost, A. T., & Sutton, S. (2004). Psychological impact of genetic counseling for familial cancer: A systematic review and meta-analysis. Journal of the National Cancer Institute, 96, 122–133.
- Butow, P. N., & Lobb, E. A. (2004). Analyzing the process and content of genetic counseling in familial breast cancer consultations. Journal of Genetic Counseling, 13, 403–424.
- Diefenbach, M. A., Weinstein, N. D., & O’Reilly, J. (1993). Scales for assessing perceptions of health hazard susceptibility. Health Education Research, 8, 181–192.
- Edwards, A., & Bastian, H. (2001). Risk communication: Making evidence part of patient choices. In A. Edwards & G. Elwyn (eds.), Evidence-based patient choice: Inevitable or impossible? Oxford: Oxford University Press, pp. 144–160.
- Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684–704.
- Klein, W. M., & Weinstein, N. D. (1997). Social comparison and unrealistic optimism about personal risk. In B. P. Buunk & F. Gibbons (eds.), Health, coping, and well-being: Perspectives from social comparison theory. Mahwah, NJ: Lawrence Erlbaum, pp. 25–61.
- Leventhal, H., Leventhal, E. A., & Cameron, L. (2001). Representations, procedures and affect in illness self-regulation: A perceptual-cognitive model. In A. Baum, T. Revenson, & J. Singer (eds.), Handbook of health psychology. Mahwah, NJ: Lawrence Erlbaum, pp. 19–47.
- Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267–286.
- Rohrmann, B. (1992). The evaluation of risk communication effectiveness. Acta Psychologica, 81, 169–192.
- Slovic, P., Peters, E., Finucane, M. L., & MacGregor, D. G. (2005). Affect, risk, and decision making. Health Psychology, 24(4, suppl.), S35–S40.
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
- Weinstein, N. D. (1999). What does it mean to understand a risk? Evaluating risk comprehension. Journal of the National Cancer Institute Monographs, 25, 15–20.
- Weinstein, N. D., & Sandman, P. M. (1993). Some criteria for evaluating risk messages. Risk Analysis, 13, 103–114.
- Windschitl, P. D., & Wells, G. L. (1996). Measuring psychological uncertainty: Verbal versus numerical methods. Journal of Experimental Psychology: Applied, 2, 343–364.
- Witte, K., Meyer, G., & Martell, D. P. (2001). Effective health risk messages: Step-by-step guide. Thousand Oaks, CA: Sage.