
It’s Official: Facebook’s Fact-Checking Is Making Its Fake News Problem Even Worse



Four years after Facebook decided it needed to do something to fix its fake news problem, we now know those efforts only made things worse.


As the world was still coming to terms with President Donald Trump’s victory in the 2016 election, Facebook rolled out a program that December for independent fact-checkers to flag questionable content as “disputed.” But a new study out of MIT found that people assume that if some articles have warnings, those that don’t must be accurate.

“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” David Rand, one of the authors of the report and a professor at the MIT Sloan School of Management, said in a statement.

Rand and his co-authors call this the “implied truth effect.” And because of the scale of Facebook’s platform, which has 1.25 billion daily users, and the sheer volume of content posted to it, fact-checkers simply can’t keep up.

“There’s no way the fact-checkers can keep up with the stream of misinformation, so even if the warnings do really reduce belief in the tagged stories, you still have a problem, because of the implied truth effect,” Rand added.

Facebook did not respond to questions about the study’s findings.

The MIT team conducted studies with more than 6,000 participants, who were shown a variety of real and fake news headlines as they would look on Facebook.


Half of the participants were shown stories without fact-checking tags of any sort. The other half were shown a typical Facebook feed comprising a mixture of marked and unmarked posts.

Then people were asked if they believed the headlines were accurate.

The results showed that the “false” tags used to flag inaccurate content did significantly reduce participants’ willingness to share fake stories, from 29.2% to 16.1%. But the study also found that unmarked false stories were believed to be true and shared 36.2% of the time.

Over one-fifth of the study’s participants said they believed that the unmarked posts had already been fact-checked.

Rand, an unpaid advisor to Facebook’s efforts to combat fake news, says one obvious fix is simply to employ more fact-checkers so that all news content posted to Facebook gets checked. He adds that ordinary Facebook users could be recruited to do this work.


While many people assume that allowing Facebook users to do this work would make the situation much worse, given the partisan nature of content already on the site, Rand’s research has found the opposite.

In fact, this is something Facebook is already considering. In December, the company announced it would hire part-time “community reviewers” to help corroborate or debunk content flagged by its automated systems.

Facebook wouldn’t say at the time how many reviewers it would hire or how much content they would be reviewing.

Another solution is simply applying more labels.

“If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem because there’s no longer any ambiguity,” Rand said. “If you see a story without a label, you know it simply hasn’t been checked.”

Cover: The Facebook logo on the display of a smartphone. Photo by: Soeren Stache/picture-alliance/dpa/AP Images