Much research attention has been devoted to identifying the factors that affect people’s ability to detect others’ deceptive acts. Communication researchers typically have focused on the accuracy of judgments based on the verbal and nonverbal behaviors of a message source, unaided by technological devices or extra-interaction information. In general, results show that people are not very good at detecting deception, that people overestimate their ability to detect deception, and that people are more likely to believe that others are honest than deceptive, independent of the other person’s actual honesty.
Deception occurs when one person intentionally misleads another person. The most frequently investigated type of deception is the lie. Lying involves knowingly presenting false information. Other forms of deception include omission, evasion, and equivocation, but most deception detection research focuses exclusively on the ability to distinguish outright lies from complete truths (McCornack 1997). People are considered to be accurate when they judge truthful messages to be honest and deceptive messages to be dishonest.
Deception detection experiments usually involve showing research participants a series of truths and lies, and having them identify which are which. Judges are typically exposed to an equal number of truths and lies, and accuracy is most often calculated as the percentage of correct judgments, averaged across truths and lies. More recent research also reports separate accuracy scores for truths and for lies.
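As an illustration of this scoring scheme, the following is a minimal sketch (not code from any published study; the function and variable names are hypothetical) of how overall accuracy and separate truth and lie accuracies can be computed from a judge's calls:

```python
# Hypothetical sketch of how detection accuracy is typically scored:
# overall percentage correct, plus separate scores for truths and lies.

def accuracy(judgments, veracity):
    """judgments and veracity are parallel lists of 'truth'/'lie' labels:
    the judge's call and the message's actual veracity."""
    correct = [j == v for j, v in zip(judgments, veracity)]
    overall = 100 * sum(correct) / len(correct)
    truth_hits = [c for c, v in zip(correct, veracity) if v == "truth"]
    lie_hits = [c for c, v in zip(correct, veracity) if v == "lie"]
    truth_acc = 100 * sum(truth_hits) / len(truth_hits)
    lie_acc = 100 * sum(lie_hits) / len(lie_hits)
    return overall, truth_acc, lie_acc

# Example: 4 truths and 4 lies; a truth-biased judge calls 6 of 8 "truth".
actual = ["truth"] * 4 + ["lie"] * 4
judged = ["truth", "truth", "truth", "truth", "truth", "truth", "lie", "lie"]
print(accuracy(judged, actual))  # (75.0, 100.0, 50.0)
```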
Three consistent findings are evident in the literature. First, people are significantly, but only slightly, above chance levels of accuracy (50 percent). Second, people overestimate their ability to detect others’ lies. Third, people are more likely to judge a message truthful than deceptive, independent of actual message veracity. Many of these findings are counterintuitive; common sense and conventional wisdom are often wrong about deception detection accuracy.
People are, on average, 54 percent accurate in deception detection experiments (Bond & DePaulo 2006). This value is significantly greater than the 50 percent chance rate and is very stable, with few studies reporting values below 45 or above 65 percent. Although a number of variables affect detection accuracy, the impact of most of them is small in absolute terms. Nonverbal training improves accuracy only slightly. Judges tend to be slightly more accurate when presented with audio-only, audiovisual, or text-based messages than when only visual information is available. Sender motivation actually improves accuracy; that is, the more motivated the liar, the more transparent the lie. In contrast, variables that have little impact on accuracy include source expertise/occupation, source–receiver relationship, extent of interaction, question-asking, and whether honesty judgments are scaled or dichotomous (Bond & DePaulo 2006).
There are several reasons why people tend to be inaccurate lie detectors. First, there do not appear to be any strong, cross-situational behavioral cues that make high accuracy possible. Although statistically reliable cues to deception are observed across studies (DePaulo et al. 2003), these cues are too inconsistent to be of much use in detecting specific instances of deception. Second, people pay attention to cues that lack diagnostic utility. For example, there is a widely held belief that liars do not look other people in the eye, yet truth-tellers and liars do not differ in eye behavior, and eye gaze has no diagnostic utility (DePaulo et al. 2003). Third, research procedures exclude much of the information that is potentially useful for detecting lies. Research indicates that when people do detect lies in everyday life, it is often well after the fact, and on the basis of information other than the source’s verbal and nonverbal behavior at the time of the message (Park et al. 2002). Instead, detection is often based on inconsistencies with prior knowledge, information from third parties, and physical evidence. Such information is not available in most deception detection experiments. Finally, people are generally truth-biased and often fail even to consider the possibility of deceit (Levine et al. 1999).
Truth-bias refers to the tendency to believe another person independent of actual message veracity. Truth-bias likely stems from how people mentally represent true and false information and from tacit assumptions that guide communication (Levine et al. 1999; McCornack 1997). Truth-bias is more pronounced in face-to-face interaction, when communicating with relationally close others, and when people are not primed to be suspicious. Because people are more likely to judge messages truthful than deceptive, people are more likely to be correct when judging truths than lies. Accuracy for truthful messages is often well above 50 percent, and accuracy for lies is often below 50 percent.
Further, so long as people are truth-biased, the greater the proportion of honest messages judged, the greater the percentage of judgments that are likely correct (Levine et al. 1999).
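The arithmetic behind this point can be sketched as follows (the accuracy values are hypothetical illustrations, not data from Levine et al. 1999): for a truth-biased judge, overall accuracy is a weighted average of truth accuracy and lie accuracy, weighted by the proportion of honest messages judged.

```python
# Illustrative sketch with assumed (hypothetical) accuracy rates for a
# truth-biased judge: above chance on truths, below chance on lies.
TRUTH_ACC = 0.70  # assumed accuracy on honest messages
LIE_ACC = 0.40    # assumed accuracy on lies

def overall_accuracy(p_truth):
    """Expected percentage correct when p_truth of the judged messages are honest."""
    return 100 * (p_truth * TRUTH_ACC + (1 - p_truth) * LIE_ACC)

for p in (0.25, 0.50, 0.75):
    print(f"{int(p * 100)}% honest messages -> {overall_accuracy(p):.1f}% correct")
# 25% honest messages -> 47.5% correct
# 50% honest messages -> 55.0% correct
# 75% honest messages -> 62.5% correct
```

Under these assumed rates, the same judge looks progressively more accurate as the proportion of honest messages rises, even though nothing about the judge’s ability has changed.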
The major challenge facing deception detection research is making experiments more realistic while maintaining ground truth. Ground truth means that the actual truthful or deceptive nature of the message must be known so that accuracy can be determined. The need for ground truth leads most researchers to laboratory experiments, which makes studying realistic, higher-stakes, unsanctioned lies challenging. There is growing recognition that the distinction between everyday lies and high-stakes lies is a meaningful one, and that different methodologies may be needed to study different types of deception.
References:
- Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10, 214–234.
- DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129, 74–118.
- Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125–144.
- McCornack, S. A. (1997). The generation of deceptive messages: Laying the groundwork for a viable theory of interpersonal deception. In J. O. Greene (ed.), Message production. Mahwah, NJ: Lawrence Erlbaum, pp. 91–126.
- Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrara, M. (2002). How people really detect lies. Communication Monographs, 69, 144–157.