News stories should come with warning labels, study finds

An MIT study found that warning labels on online news stories led readers to assume that anything not labeled false was true.

Libby Emmons, Brooklyn, NY

An MIT study found that warning labels on news stories posted online led users to believe that anything not labeled false was true. Warning labels on stories came into being after the 2016 election, when progressives decided that Trump was only elected because the populace was so stupid that it believed the misinformation it saw in online fake news. They wanted the scourge stopped, and warning labels emerged as one of the answers. Finally! A way to cleanse the rubes of their predilection for falsehoods. But as it turns out, warning labels on news content are not super effective.

The MIT study showed news consumers a selection of stories under three conditions: no warning labels at all, "false" labels attached to some of the false stories, or a mix of "true" and "false" labels. In the condition where only "false" labels appeared, something called the "implied truth effect" came into play, meaning that readers assumed anything not tagged false was true.

When there were no labels, users shared false stories 29.8 percent of the time. That dropped to 16.1 percent when a false label was attached, a decline of 13.7 percentage points (roughly a 46 percent relative reduction), but it didn't eliminate the problem entirely. Even with news stories blatantly tagged as false, users shared them. And once the "implied truth effect" kicked in, where readers assumed that anything not labeled false was necessarily true, they shared 36.2 percent of the false stories that carried no label, simply assuming those were true.

Researchers somehow concluded that this means there should be more labels on content, not fewer.

“If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem,” says Professor David Rand, who co-authored the study, “because there’s no longer any ambiguity. If you see a story without a label, you know it simply hasn’t been checked.”

Where he sees a decline in ambiguity, I see a sharp increase.

While some users were likely to share false stories that carried no warning label, others were also likely to share false stories that were tagged false. This happens primarily when a person believes a story to be true despite its having been labeled false by a nameless, faceless expert. The idea is that a person will not trust the source of the label more than they trust their own understanding, their own bias.

While the researchers find fault with that, I think it is a positive. When faced with a random, faceless expert's opinion on the truth or falsehood of a news story, users will make their own determination. If anything, what this study tells us is not that users are unduly influenced by their own bias, but that any combination of labels, whether a smattering of true and false or only false, creates skewed results. Better to take our chances with readers and their unquantifiable biases than to try to control their behaviour and perspectives by applying content warnings at all.

Twitter came up with the not-so-brilliant plan of using verified accounts to confirm information, then scoring those accounts based on how often they got the labeling right, with correctness determined by how many of the verified verifiers agreed that the information was correct. Instead of encouraging people to think for themselves and resist herd mentality, people would be shepherded by blue check marks into all thinking the same thing. But verified verifiers are not going to be right all the time, and handing them the key to the kingdom of veracity only to have them jam the lock now and then will cripple whatever effectiveness the platforms thought these labels would have in the first place.

Additionally, no social media platform or team of researchers has offered any insight into how these experts would be selected in a way that is bias-proof. If there's anything social media companies have shown us through their bans and suspensions, it's that they don't know how to tell what's real any better than these fake-news-sharing users do.

Neither social media fact checkers nor machine learning algorithms should be verifying information for us. Instead, individuals should be prepared to do the heavy lifting of discerning truth from fiction. We do not need experts to quantify reality for us.
