
[link] Misinformation and Its Correction: Continued Influence and Successful Debiasing

Post author: Kaj_Sotala 23 November 2012 07:58AM

http://psi.sagepub.com/content/13/3/106.full

Abstract.

The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.

We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.

We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing.

We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.

This is a fascinating article with many, many interesting points. I'm excerpting some of them below, but mostly just to get you to read it: if I were to quote everything interesting, I'd have to pretty much copy the entire (long!) article.

Rumors and fiction

[...] A related but perhaps more surprising source of misinformation is literary fiction. People extract knowledge even from sources that are explicitly identified as fictional. This process is often adaptive, because fiction frequently contains valid information about the world. For example, non-Americans’ knowledge of U.S. traditions, sports, climate, and geography partly stems from movies and novels, and many Americans know from movies that Britain and Australia have left-hand traffic. By definition, however, fiction writers are not obliged to stick to the facts, which creates an avenue for the spread of misinformation, even by stories that are explicitly identified as fictional. A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).

The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011). Few people would be so alert and mindful when reading fiction for enjoyment. These links between fiction and incorrect knowledge are particularly concerning when popular fiction pretends to accurately portray science but fails to do so, as was the case with Michael Crichton’s novel State of Fear. The novel misrepresented the science of global climate change but was nevertheless introduced as “scientific” evidence into a U.S. Senate committee (Allen, 2005; Leggett, 2005).

Writers of fiction are expected to depart from reality, but in other instances, misinformation is manufactured intentionally. There is considerable peer-reviewed evidence pointing to the fact that misinformation can be intentionally or carelessly disseminated, often for political ends or in the service of vested interests, but also through routine processes employed by the media. [...]

Assessing the Truth of a Statement: Recipients’ Strategies

Misleading information rarely comes with a warning label. People usually cannot recognize that a piece of information is incorrect until they receive a correction or retraction. For better or worse, the acceptance of information as true is favored by tacit norms of everyday conversational conduct: Information relayed in conversation comes with a “guarantee of relevance” (Sperber & Wilson, 1986), and listeners proceed on the assumption that speakers try to be truthful, relevant, and clear, unless evidence to the contrary calls this default into question (Grice, 1975; Schwarz, 1994, 1996). Some research has even suggested that to comprehend a statement, people must at least temporarily accept it as true (Gilbert, 1991). On this view, belief is an inevitable consequence of—or, indeed, precursor to—comprehension.

Although suspension of belief is possible (Hasson, Simmons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it seems to require a high degree of attention, considerable implausibility of the message, or high levels of distrust at the time the message is received. So, in most situations, the deck is stacked in favor of accepting information rather than rejecting it, provided there are no salient markers that call the speaker’s intention of cooperative conversation into question. Going beyond this default of acceptance requires additional motivation and cognitive resources: If the topic is not very important to you, or you have other things on your mind, misinformation will likely slip in. [...]

Is the information compatible with what I believe?

As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974). People assess the logical compatibility of the information with other facts and beliefs. Once a new piece of knowledge-consistent information has been accepted, it is highly resistant to change, and the more so the larger the compatible knowledge base is. From a judgment perspective, this resistance derives from the large amount of supporting evidence (Wyer, 1974); from a cognitive-consistency perspective (Festinger, 1957), it derives from the numerous downstream inconsistencies that would arise from rejecting the prior information as false. Accordingly, compatibility with other knowledge increases the likelihood that misleading information will be accepted, and decreases the likelihood that it will be successfully corrected.

When people encounter a piece of information, they can check it against other knowledge to assess its compatibility. This process is effortful, and it requires motivation and cognitive resources. A less demanding indicator of compatibility is provided by one’s meta-cognitive experience and affective response to new information. Many theories of cognitive consistency converge on the assumption that information that is inconsistent with one’s beliefs elicits negative feelings (Festinger, 1957). Messages that are inconsistent with one’s beliefs are also processed less fluently than messages that are consistent with one’s beliefs (Winkielman, Huber, Kavanagh, & Schwarz, 2012). In general, fluently processed information feels more familiar and is more likely to be accepted as true; conversely, disfluency elicits the impression that something doesn’t quite “feel right” and prompts closer scrutiny of the message (Schwarz et al., 2007; Song & Schwarz, 2008). This phenomenon is observed even when the fluent processing of a message merely results from superficial characteristics of its presentation. For example, the same statement is more likely to be judged as true when it is printed in high rather than low color contrast (Reber & Schwarz, 1999), presented in a rhyming rather than nonrhyming form (McGlone & Tofighbakhsh, 2000), or delivered in a familiar rather than unfamiliar accent (Lev-Ari & Keysar, 2010). Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font (Song & Schwarz, 2008).

As a result, analytic as well as intuitive processing favors the acceptance of messages that are compatible with a recipient’s preexisting beliefs: The message contains no elements that contradict current knowledge, is easy to process, and “feels right.”

Is the story coherent?

Whether a given piece of information will be accepted as true also depends on how well it fits a broader story that lends sense and coherence to its individual elements. People are particularly likely to use an assessment strategy based on this principle when the meaning of one piece of information cannot be assessed in isolation because it depends on other, related pieces; use of this strategy has been observed in basic research on mental models (for a review, see Johnson-Laird, 2012), as well as extensive analyses of juries’ decision making (Pennington & Hastie, 1992, 1993).

A story is compelling to the extent that it organizes information without internal contradictions in a way that is compatible with common assumptions about human motivation and behavior. Good stories are easily remembered, and gaps are filled with story-consistent intrusions. Once a coherent story has been formed, it is highly resistant to change: Within the story, each element is supported by the fit of other elements, and any alteration of an element may be made implausible by the downstream inconsistencies it would cause. Coherent stories are easier to process than incoherent stories are (Johnson-Laird, 2012), and people draw on their processing experience when they judge a story’s coherence (Topolinski, 2012), again giving an advantage to material that is easy to process. [...]

Is the information from a credible source?

[...] People’s evaluation of a source’s credibility can be based on declarative information, as in the above examples, as well as experiential information. The mere repetition of an unknown name can cause it to seem familiar, making its bearer “famous overnight” (Jacoby, Kelley, Brown, & Jasechko, 1989)—and hence more credible. Even when a message is rejected at the time of initial exposure, that initial exposure may lend it some familiarity-based credibility if the recipient hears it again.

Do others believe this information?

Repeated exposure to a statement is known to increase its acceptance as true (e.g., Begg, Anas, & Farinacci, 1992; Hasher, Goldstein, & Toppino, 1977). In a classic study of rumor transmission, Allport and Lepkin (1945) observed that the strongest predictor of belief in wartime rumors was simple repetition. Repetition effects may create a perceived social consensus even when no consensus exists. Festinger (1954) referred to social consensus as a “secondary reality test”: If many people believe a piece of information, there’s probably something to it. Because people are more frequently exposed to widely shared beliefs than to highly idiosyncratic ones, the familiarity of a belief is often a valid indicator of social consensus. But, unfortunately, information can seem familiar for the wrong reason, leading to erroneous perceptions of high consensus. For example, Weaver, Garcia, Schwarz, and Miller (2007) exposed participants to multiple iterations of the same statement, provided by the same communicator. When later asked to estimate how widely the conveyed belief is shared, participants estimated consensus to be greater the more often they had read the identical statement from the same, single source. In a very real sense, a single repetitive voice can sound like a chorus. [...]

The extent of pluralistic ignorance (or of the false-consensus effect) can be quite striking: In Australia, people with particularly negative attitudes toward Aboriginal Australians or asylum seekers have been found to overestimate public support for their attitudes by 67% and 80%, respectively (Pedersen, Griffiths, & Watt, 2008). Specifically, although only 1.8% of people in a sample of Australians were found to hold strongly negative attitudes toward Aboriginals, those few individuals thought that 69% of all Australians (and 79% of their friends) shared their fringe beliefs. This represents an extreme case of the false-consensus effect. [...]

The Continued Influence Effect: Retractions Fail to Eliminate the Influence of Misinformation

We first consider the cognitive parameters of credible retractions in neutral scenarios, in which people have no inherent reason or motivation to believe one version of events over another. Research on this topic was stimulated by a paradigm pioneered by Wilkes and Leatherbarrow (1988) and H. M. Johnson and Seifert (1994). In it, people are presented with a fictitious report about an event unfolding over time. The report contains a target piece of information: For some readers, this target information is subsequently retracted, whereas for readers in a control condition, no correction occurs. Participants’ understanding of the event is then assessed with a questionnaire, and the number of clear and uncontroverted references to the target (mis-)information in their responses is tallied.

A stimulus narrative commonly used in this paradigm involves a warehouse fire that is initially thought to have been caused by gas cylinders and oil paints that were negligently stored in a closet (e.g., Ecker, Lewandowsky, Swire, & Chang, 2011; H. M. Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Some participants are then presented with a retraction, such as “the closet was actually empty.” A comprehension test follows, and participants’ number of references to the gas and paint in response to indirect inference questions about the event (e.g., “What caused the black smoke?”) is counted. In addition, participants are asked to recall some basic facts about the event and to indicate whether they noticed any retraction.

Research using this paradigm has consistently found that retractions rarely, if ever, have the intended effect of eliminating reliance on misinformation, even when people believe, understand, and later remember the retraction (e.g., Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein, McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In fact, a retraction will at most halve the number of references to misinformation, even when people acknowledge and demonstrably remember the retraction (Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in some studies, a retraction did not reduce reliance on misinformation at all (e.g., H. M. Johnson & Seifert, 1994).

When misinformation is presented through media sources, the remedy is the presentation of a correction, often in a temporally disjointed format (e.g., if an error appears in a newspaper, the correction will be printed in a subsequent edition). In laboratory studies, misinformation is often retracted immediately and within the same narrative (H. M. Johnson & Seifert, 1994). Despite this temporal and contextual proximity to the misinformation, retractions are ineffective. More recent studies (Seifert, 2002) have examined whether clarifying the correction (minimizing misunderstanding) might reduce the continued influence effect. In these studies, the correction was thus strengthened to include the phrase “paint and gas were never on the premises.” Results showed that this enhanced negation of the presence of flammable materials backfired, making people even more likely to rely on the misinformation in their responses. Other additions to the correction were found to mitigate the continued influence effect to a degree, but not to eliminate it: For example, when participants were given a rationale for how the misinformation originated, such as “a truckers’ strike prevented the expected delivery of the items,” they were somewhat less likely to make references to it. Even so, the influence of the misinformation could still be detected. The wealth of studies on this phenomenon has documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to that of people who were never exposed to it.

Multiple explanations have been proposed for the continued influence effect. We summarize their key assumptions next. [...]

Concise recommendations for practitioners

[...] We summarize the main points from the literature in Figure 1 and in the following list of recommendations:

  • Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.

  • Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.

  • To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.

  • Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.

  • Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.

  • Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.

  • If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.

  • You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.


Comments (5)

Comment author: Stabilizer 25 November 2012 05:11:20PM 8 points

Something very weird happened to me today after reading this paragraph in the article yesterday:

Another particularly well-documented case of the persistence of mistaken beliefs despite extensive corrective efforts involves the decades-long deceptive advertising for Listerine mouthwash in the U.S. Advertisements for Listerine had falsely claimed for more than 50 years that the product helped prevent or reduce the severity of colds and sore throats.

I had not known earlier that Listerine had claimed to alleviate colds and sore throats. This morning, as I was using my Listerine mouthwash, I felt as though the Listerine was helping my sore throat. Not deliberatively of course, but instantaneously. And my mind also instantaneously constructed a picture where the mouthwash was killing germs in my throat. This happened after I learned about the claim from a source whose only reason for mentioning it was that it was false. From a source about the dangers of misinformation.

Misinformation is more insidious than I suspected.

Comment author: Kaj_Sotala 23 November 2012 08:56:46AM 8 points

For example, Berinsky (2012) found that among Republicans, corrections of the death-panel myth were effective primarily when they were issued by a Republican politician. However, judgments of a source’s credibility are themselves a function of beliefs: If you believe a statement, you judge its source to be more credible (Fragale & Heath, 2004). This interaction between belief and credibility judgments can lead to an epistemic circularity, whereby no opposing information is ever judged sufficiently credible to overturn dearly held prior knowledge. For example, Munro (2010) has shown that exposure to belief-threatening scientific evidence can lead people to discount the scientific method itself: People would rather believe that an issue cannot be resolved scientifically, thus discounting the evidence, than accept scientific evidence in opposition to their beliefs.

Amusingly, after I had read this bit, I realized that I had been trusting this article and its authors more because the claims it made sounded credible. That's a pretty vicious cycle: if somebody says things that support your beliefs, you judge them to be more credible, and then your beliefs get stronger because credible sources support them... maybe I should try to correct for that by trusting this article less. :)

Comment author: Morendil 23 November 2012 08:03:20AM 7 points

Good stuff, thanks!

Relevant Wondermark comic.

Comment author: khafra 26 November 2012 06:08:44PM 1 point

Caveats:

  • I haven't figured out a way to test this

  • Sample size of one

However, I believe I can often feel a characteristic affective state when my beliefs have just updated, and use that to trigger a manual review where I try to figure out what beliefs changed, what triggered the change, and whether I consciously endorse the change.

Comment author: shokwave 25 November 2012 12:29:45AM 1 point

Provide an explicit warning before mentioning a myth ... Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.

This crystallised a few of the reasons I respect the Sequences as excellent writing.