30 August 2021 01:00
There are many aspects to the fight against disinformation. One problem that fact-checkers and researchers are grappling with is how best to label false information. After a piece of content has been verified or fact-checked, all that remains is to attach a warning so that readers can decide for themselves what action to take. However, people react differently depending on how content is flagged. Too subtle, and the warning will be overlooked; too loud, and you risk irritating people. Marking content as “false” or “inaccurate” also has repercussions of its own, as it can raise or lower users’ scepticism towards online news in general.
In 2018, Pennycook, Cannon and Rand conducted a study into fake news and the illusory truth effect – a phenomenon whereby the repetition of a falsehood increases its believability. Part of their research found that the type of warning label used by Facebook (e.g. “Disputed by 3rd party fact-checkers”) does not necessarily change the perceived truthfulness of a particular headline, but can increase overall scepticism. Based on their results, the authors conclude that the ideal solution would be to “prevent people from ever seeing fake news in the first place, rather than qualifiers aimed at making people discount the fake news that they do see.”
In a follow-up study, Pennycook, Bear, Collins and Rand examined an unintended consequence of warning labels, which they call the implied truth effect: posting warnings on some content leads readers to assume that articles without warnings are accurate. While fact-checkers usually apply warning tags only to content that is blatantly false, other types of disinformation (such as content that takes real events and spins them in a misleading way) go unchecked, and are therefore given a sense of implied truth. The authors suggest that a better method might be to attach a verification tag to factual information instead. This would remove the ambiguity about whether untagged headlines had been checked, while also eliminating – and even slightly reversing – the implied truth effect.
The content of a warning is only part of the equation. Placement also has a huge impact on how effectively we can combat disinformation. In their conference paper Adapting Security Warnings to Counter Online Disinformation, Kaiser et al. studied the differences in the effectiveness of both contextual and interstitial warnings. Contextual warnings appear alongside a post whereas interstitial warnings obscure it, requiring the user to engage with the warning before seeing the post.
Their research found that 65% of participants failed to notice contextual warnings, and that the click-through rate for posts flagged in this way remained quite high. In contrast, an interstitial warning was noticed by every participant and produced a very low click-through rate. While some users expressed distaste at being told what content they should and shouldn’t be looking at, the majority agreed that an interstitial warning would be more effective, precisely because it interrupts the workflow and requires specific input before the user can proceed. A contextual warning, by comparison, is much harder to make stand out – particularly in a list of search results.
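The mechanical difference between the two warning styles can be illustrated with a minimal sketch. The function and label names below are hypothetical, not drawn from the studies or from any platform’s actual code: a contextual warning leaves the flagged post visible alongside the label, while an interstitial warning hides the post until the user explicitly clicks through.

```javascript
// Minimal sketch (hypothetical names): decide whether a post's content
// is shown, given how it was flagged and what the user has done.
function isPostVisible(warningType, userClickedThrough) {
  // Unflagged content is shown immediately.
  if (warningType === null) return true;
  // A contextual warning sits beside the post; the post stays visible.
  if (warningType === "contextual") return true;
  // An interstitial warning obscures the post until the user
  // acknowledges the warning and chooses to proceed.
  if (warningType === "interstitial") return userClickedThrough;
  // Unknown warning types are treated conservatively.
  return false;
}
```

The sketch makes the studies’ finding easy to see in code: a contextual label never blocks the content, so an inattentive reader can scroll past it, whereas the interstitial path cannot return `true` without an explicit user action.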
While the studies might disagree on certain points, there is a consensus that consistency is one of the most important aspects of countering disinformation. Given that there are so many outlets for people to pass off lies as facts, perhaps the most realistic solution is to signal which sites are trusted rather than those that aren’t. Where warning tags are used, it’s important that they are obvious and, ideally, interrupt a user’s workflow before they can keep reading.
These insights are being applied by the Provenance project, which is developing a set of automated warning labels for online content. Although Provenance does not fact-check content, its labels will provide essential context about a post, including the quality and emotional tone of the writing, the similarity of its text and visuals to other sources, and whether the visual content has been manipulated in some way. The next step is to test the effectiveness of these labels, including whether users notice the warnings and whether there are any unintended consequences.
Written by Alex Conroy
© 2019 Provenance | The PROVENANCE Action Management Team, Dublin City University, Glasnevin, Dublin 9, Ireland