Helping science succeed

Changing opinions Part 2: How society can overcome belief perseverance

Once upon a time, we had the luxury of taking our time to learn about the root causes of misinformation and disinformation, and to experiment with strategies for overcoming our enduring belief in lies and half-truths. Today, however, as the impact of these forces wreaks havoc on societies around the world, funders and policymakers have imbued the search for answers and solutions with a certain amount of urgency.

To date, our strategies for combatting misinformation and disinformation and their lingering effects fall into two main categories: prebunking and debunking. Prebunking strategies cover a wide range of activities, including but not limited to:

  1. Warnings: Warning people that the information they are about to see might be misleading can help them treat incoming information more skeptically. An added layer to simply warning people is “inoculating” them—providing specific examples of how an audience will be misled so they can be even more cognitively prepared. For example, a defense attorney might tell a jury that the prosecution is going to put an “expert” on the stand who is not an expert at all, so jurors should take this person’s testimony with a grain of salt. Using humor can also be a great approach here—any approach that isn’t seen as partisan. See this series of videos put out by the Truth Labs for Education that are designed to help inoculate viewers against misinformation and disinformation techniques like using false dichotomies (most of these tools have proved effective in a lab setting).
  2. Media training: Familiarizing people with how the media works can help them better understand why some ideas sell and spread, observe how and why people can be quoted out of context, and so on.
  3. Games: Hands-on games can help people experience what media manipulation looks like for real—for example, games that let students be media tycoons and earn points for getting followers by emulating bad online behavior.
  4. Critical thinking training: It’s vital to teach students to analyze information objectively, and to assess evidence. Some of this training falls into the category of “information literacy” training; some is just good education. Encouraging healthy skepticism and a curiosity-driven learning environment helps individuals develop the capacity to question their beliefs, recognize their biases, and be open to new ideas. Along these same lines, well-structured classroom discussions can effectively counteract belief perseverance.
  5. Exposure to diverse perspectives: Engaging with diverse viewpoints can disrupt echo chambers, decrease polarization, and reduce belief perseverance (social media platforms as currently constructed excel at reinforcing existing beliefs).
  6. Promoting empathetic communication: Engaging in empathetic and respectful conversations can create an environment conducive to open-mindedness. When individuals feel understood and respected, they are more likely to lower their guard and consider alternative viewpoints. Techniques such as active listening and empathy-building exercises can facilitate constructive discussions.
  7. Encouraging self-reflection: Cultivating self-awareness and self-reflection can help individuals recognize their own biases and confront their belief perseverance tendencies. Journaling, mindfulness practices, and guided introspection can aid in identifying instances where beliefs are being clung to despite contradicting evidence. Studies have shown that individuals have difficulty recognizing their own biases, but at the same time are fully capable of identifying biases in others. Encouraging introspection and self-reflection can bridge this gap, allowing individuals to acknowledge and address their own belief perseverance.

Studies have demonstrated that prebunking strategies like these can help reduce the uptake and spread of misinformation and disinformation. But they don’t work all the time, and in the real world these strategies alone can at best put a dent in misinformation and disinformation, not eliminate it entirely. So debunking is also needed.

Why? The short answer is that misinformation and disinformation come at us like a firehose in everyday life—emotionally manipulative language, false equivalencies and dichotomies, scapegoating, ad hominem attacks, and gaslighting from “news” shows, social media channels, friends and family, and “Trump Won” flags. There is no way we can carefully evaluate all this information before it enters our brains and admit only the absolute truth. Like it or not, we absorb a great deal of information through our perceptual net, true and false, and our brains store, judge, and later recall it. How difficult it is to correct or retract this stored information after the fact depends on a kaleidoscope of considerations: whether the information aligns with our worldview, whether we think the sources (of both the original information and the retraction) are credible, whether the new and old information is plausible, and so on.

Therefore, debunking has its limits. There are challenges aplenty depending on the audience and the type of retraction or correction being attempted. For example, research has shown that while retractions can be effective across all groups regardless of worldview, political orientation, biases, and so on, corrections are a different story. Corrections can be much less effective if they are seen as coming from an untrustworthy source, are inconsistent with other closely held beliefs, are less familiar, are more difficult to process, are less coherent, or are less supported by one’s social network. In fact, when corrections are too at odds with a person’s worldview, a backfire effect can occur, and people can become even more committed to the misinformation being corrected.

Retractions also rarely, if ever, have the intended effect of entirely eliminating misinformation, even when people believe, understand, and later remember the retraction. A retraction can reduce references to the disinformation, have no effect, backfire, or even harden unreceptive audiences against believing any future corrections or retractions.

A phenomenon closely linked to the backfire effect is belief polarization. Here, personal beliefs act as the lens through which all information is filtered. When information received through this lens is consistent with our beliefs, it is enhanced; when it is inconsistent, it is discounted or discarded. This phenomenon helps explain why highly educated Democrats can believe in climate change while highly educated Republicans can think climate change is a hoax. The facts aren’t being processed by dispassionate computers; they’re being filtered through the lens of our preexisting belief systems. Indeed, belief-threatening scientific evidence can cause people to discount science entirely. People would rather believe science is wrong than accept evidence that runs counter to their beliefs.

There are several reasons why retractions and corrections can be ineffective. One is that we build mental models of the information we receive. When a central piece of one of these models is invalidated, we fill the gap with information that may be inaccurate but that is at least congruent—information that makes sense to us based on what else we know and believe. If filling this gap becomes too difficult, it’s simply easier to ignore the new information and stick with the original idea.

Another reason retractions and corrections are problematic is that our memory does a poor job of encoding negations—the tag that says this information is NOT true. When we recall an event, we recall both the true and the NOT-true versions, and the negation can simply be lost over time.

A third reason why debunking can be ineffective is that when we debunk something, we can end up repeating the same initial lie. Over time, this repetition serves to enhance the familiarity of the original lie, and paradoxically, this enhanced familiarity makes the lie seem more plausible because we’ve heard it so often. This effect becomes more pronounced with age as cognitive function declines, and we’re less able to recall the sources of the information stored in our brains.

Yet another challenge to corrections and retractions is our social reaction. People don’t like to be told what to do. So, when people are told to ignore a certain statement, researchers have shown that they actually do the opposite and pay more attention to it. People can also react negatively to too much protest: overdoing our corrections can end up reducing confidence in their veracity. Emotiveness also plays a role in our social reaction. The likelihood that people will pass on information (or misinformation) is strongly tied to the likelihood it will elicit an emotional response from their networks. Lies spread faster and farther than the truth, in part because we spread lies through our social networks to get a “wow!” or “thumbs up!” reaction, not to inform each other of bland facts.

And then, of course, there’s the simple practical problem of delivering corrections or retractions to everyone who is exposed to the misinformation and disinformation. If someone sees a Facebook headline about how the moon landings were faked and feels inclined to forward this post to their network of 100 friends and family, many of whom may in turn share it with their friends and family, how can we possibly get debunking information to everyone? This challenge was less daunting in the age before social media; today, it’s a practical impossibility.

Fortunately, there may be debunking strategies that work. Studies have shown that the perseverance of our belief in misinformation and disinformation can be reduced if we provide an alternative account that fills the gap left behind by retractions and corrections. This alternative explanation needs to contain all the elements target audiences need to patch their memory holes: it must be plausible and credible, explain why the original information was incorrect and why it was thought to be correct in the first place, and, if possible, explain the motive behind the original information. The alternative also needs to be simple. Overly complex explanations will be ignored—it’s easier to stick with the simple explanation, even if it’s wrong.

It’s also important that these correction and retraction messages be tailored to each audience, so this information is consistent with the audience’s worldview, and also gives this audience a chance to affirm their basic values as part of the corrective process. Anyone who has helped a five-year-old decide to do the right thing knows this process well—it isn’t rocket science, and yet somehow it is. Effective messaging needs to give something back to everyone and empower the recipients to feel good about their decisions—to both recognize intuitively and see explicitly that this new path has more benefits than the old. Large leaps aren’t necessary in this model, either. Nudging people toward the right behavior can be far more effective over the long term, without entirely removing their freedom of choice.

Argumentation and engagement with misinformation and disinformation can also be effective. Researchers have shown that rather than bluntly countering misinformation or providing alternative explanations, another effective approach is simply to engage with it and let it collapse under its own weight. Confronting lies and distortions in this way can help people work through inconsistencies in their understanding and accept corrections.

Finally, it’s important to note that correcting misinformation and disinformation is, to us humans, cognitively indistinguishable from gaslighting—from deliberately misinforming people to replace their preexisting correct beliefs. That is, the same techniques we can use to help repair the information ecosystem are being used to make it even worse.

Source: Lewandowsky 2012

The fact that misinformation and disinformation happen at scale makes debunking particularly difficult unless it can also happen at scale. Therefore, a number of funders are also backing efforts to do just this—to develop a more robust debunking response capacity. The National Science Foundation, for example, is funding work that will help us instantly recognize the unique digital signature of disinformation as it starts to spread, and then activate aggressive measures to help stop this spread in its tracks before it can go far. The Scripps-Howard Fund is backing an effort that will allow journalists worldwide to collaborate on investigations into misinformation and disinformation campaigns. And LinkedIn co-founder Reid Hoffman and philanthropist George Soros are backing an effort that will enable global media to collaborate more effectively in combatting disinformation.

Much work remains to be done, though, in understanding the psychology of misinformation and disinformation, in learning which counter-approaches actually work at scale in the real world, and then, of course, in building the tools and networks that will work best. SCI, in partnership with the History Communication Institute, has submitted a grant proposal to bring together the best minds and ideas in this field and help lead the effort to roll out the best solutions. We’ll keep you posted on our progress.



Glenn Hampson

Glenn is Executive Director of the Science Communication Institute and Program Director for SCI’s global Open Scholarship Initiative. You can reach him at [email protected].