Bitefile: Strategies Against Misinformation

Many people are fooled by misinformation. Social media is the main place for sharing misinformation, as there is little control over the messages shared, and their reach is vast. Incorrect information is spread and sometimes even goes viral because many people share it. This can have significant consequences for society and democracy. How can media users become more resilient to misinformation? Scientific research focuses primarily on three different intervention techniques: vaccination (inoculation), nudging, and fact-checking. How effective are these intervention techniques? And what are their advantages and disadvantages? Bitescience examined the research and summarized the key insights. 

The bitefile was made in collaboration with the Dutch Media Literacy Network. All consulted literature can be found here.

What Is Misinformation and How Can You Recognize It?

Incorrect information comes in various forms and can be spread either intentionally or unintentionally. In the academic literature, misinformation refers to all forms of incorrect information, while disinformation refers only to incorrect information that is spread intentionally. In this Bitefile, we therefore use the broader term misinformation. One form of misinformation that has been extensively studied is fake news on social media: false news reports deliberately spread to mislead or influence public opinion. The exact amount of fake news and other forms of misinformation online is unclear. Although percentages vary per website and platform, research shows that most online information is reliable.

Research indicates that media users need certain knowledge and skills to recognize misinformation.

Three key knowledge elements:

  • Forms of misinformation
  • Characteristics and techniques of misinformation
  • How misinformation is spread

Five key skills:

  • Evaluating the source of a message
  • Assessing the purpose of a message
  • Evaluating the tone of a message
  • Assessing the type of information in a message
  • Evaluating images or videos

Vaccinating Against Misinformation

The call to make media users, young and old, more resilient to misinformation is growing louder. One widely studied intervention method to achieve this is inoculation. Inoculation, also known as ‘prebunking’, is a method that can be seen as a vaccination against misinformation. A small dose of misinformation is introduced, together with an explanation, so people are better prepared to resist it later.

Here’s how it works: people are exposed to various forms of misinformation. They also receive information about these forms, the characteristics and manipulation techniques of misinformation, how misinformation spreads (knowledge), and what they can do to recognize it (skills).

Inoculation is most effective when people actively engage with this knowledge and these skills by creating misinformation themselves. The idea is that going through this process helps people protect themselves from misinformation: having practiced creating it, they are better at spotting and rejecting it. An example of a media literacy program using inoculation is the serious game Bad News.

Research shows that inoculation can have the following effects:

  • People can better recognize misinformation, and the misleading techniques used within it.
  • They become more confident in recognizing misinformation.
  • They are less likely to share unreliable messages with others.

Inoculation can have advantages:

  • It trains people to better recognize misinformation, reducing their trust in it.
  • It can be used to build resilience against various forms of misinformation.

But there are also some caveats:

  • A spillover effect may occur, meaning that people become more skeptical not only of misinformation but also of reliable information.
  • Effective inoculation requires active participation, as individuals must create misinformation themselves, which takes time and effort. Not everyone will be equally motivated to participate.
  • It is unclear how long the positive effects of inoculation last and whether, or when, the intervention needs to be repeated.

A Nudge in the Right Direction

Recognizing and resisting misinformation can be challenging for media users, even after being ‘vaccinated’ against it. Research shows that websites and platforms can help their users by applying nudging. Nudging is an intervention method used to positively steer people's behavior. It can be seen as a gentle push in the right direction.

An example of a nudge is a pop-up message that appears alongside posts on websites or social media platforms. This message reminds people to consider the reliability of a post before sharing it (e.g., "How reliable do you think this post is?").

Most people want to share reliable information with others. However, we are often distracted by the vast amount of information and stimuli on social media, causing us to forget to assess the reliability of messages. Research shows that a nudge, such as the pop-up message described above, helps direct people’s attention to the reliability of information. This process is automatic and requires little effort from users.

Nudging has several advantages:

  • It can reduce the sharing of unreliable messages while increasing the sharing of reliable ones.
  • Nudges are easy, quick, and inexpensive to implement.
  • It can be applied on a large scale as it is flexible and can be tailored to a website or platform's structure.

But there are some caveats:

  • A spillover effect may occur, making people more critical of both misinformation and reliable information.
  • If someone has no prior knowledge of a topic and the message appears credible, even increased accuracy awareness may not help them determine its truthfulness.
  • It is unknown how long the positive effect of a reliability nudge lasts and how often it needs to be repeated.
  • Websites and platforms must implement reliability nudges, but it is uncertain whether they are willing to do so.

How Effective Is Fact-Checking?

Another way websites and platforms can help users recognize misinformation is by adding fact-check labels to messages. Fact-checking involves verifying the truthfulness of messages. This can be done by humans or by algorithm-based fact-checkers. If a message is (partially) false, this is indicated. Posts on Instagram and Facebook, for example, receive a special label if fact-checkers have assessed them as (partially) false, along with an explanation of why the information is incorrect. Fact-checkers help people evaluate the reliability of information.

Research shows that fact-check labels have the following effects:

  • People have less trust in messages labeled as (partially) false and are less likely to share them.
  • The amount of reliable information being shared increases.

Fact-checking has various advantages:

  • Fact-check labels reduce trust only in the messages they are attached to, avoiding a spillover effect.
  • It requires little effort from media users since they do not have to assess the information’s reliability themselves.

But there are some caveats:

  • Misinformation (even when “fact-checked”) may be remembered better and, over time, increasingly feel true despite being labeled false.
  • Fact-checking helps people evaluate information but does not teach them the knowledge and skills needed to do so independently. Thus, fact-checking should complement media literacy education rather than replace it.
  • Fact-check labels are less effective for children, especially those under 10, as they struggle to connect sources and information.
  • Labeling certain information as false may lead people to assume that unlabeled information is reliable, even if it has not been fact-checked.
  • Professional and algorithmic fact-checking can never verify all online information, so media users must also assess information themselves.
  • Fact-checks may not reach people who were initially convinced by misinformation, as people tend to reject or avoid information contradicting their beliefs.

Questions for the Future

Much research has been conducted on making media users more resilient to misinformation, but several questions remain unanswered:

  • What are the long-term effects of inoculation, nudging, and fact-checking? How frequently must these interventions be repeated to remain effective?
  • How can the spillover effect be prevented to ensure that interventions lower trust in misinformation without reducing trust in reliable information?

Deepfake videos and audio are a new form of misinformation, and recognizing them remains a major challenge. While research focuses on developing AI to detect deepfakes, studies on making media users resilient to this form of misinformation are still in their infancy.
