On Tuesday, Motherboard reported that a group of artists and machine learning engineers posted a deepfake of Mark Zuckerberg to Instagram, making it look like he gave an ominous speech about the power the social network gets from collecting user data.
According to Facebook, the video was flagged by two of its fact-checking partners, which prompted Facebook to limit the video’s distribution on its platforms. This process suggests that Facebook has the ability to mitigate the virality of a doctored video that aims to spread misinformation, at least once it’s highlighted by a news publication.
But the Zuckerberg deepfake is not part of a malicious misinformation campaign. It’s art criticizing the CEO of one of the most influential companies in the world, and now that company appears to be suppressing the distribution of that work. This raises a complicated question: How is Facebook supposed to fact-check art?
Bill Posters, one of the artists who created the Zuckerberg deepfake, told me in an email that he is “deeply concerned” about Facebook’s decision to downrank his art, saying that it sets a dangerous precedent for other artists who want to critique or challenge systems of power. He has since posted another Zuckerberg deepfake on Instagram to protest the labeling of the first one as false.
“This is the point we are trying to make,” Posters said. “How can we engage in serious exploration and debate about these incredibly important issues if we can’t use art to critically interrogate the tech giants?”
On Thursday, the House Intelligence Committee held a hearing on deepfakes as a potential national security threat, but one of the expert witnesses also raised the problem of moderating deepfakes as art.
“The value of a fake could be profound,” Danielle Citron, professor of law at the University of Maryland, said. “It could be that the deepfake contributes to art—in Star Wars, we had Carrie Fisher coming back—there’s a lot of value in deepfakes. […] All of this is so contextual, so I don’t think we can have a one-size-fits-all rule for synthetic video.”
The process of downranking the Zuckerberg deepfake started hours after Motherboard first reported it, when one of Facebook’s fact-checking partners, Lead Stories, flagged the video. We know this because Lead Stories emailed Motherboard, unprompted, the same day Motherboard published its story. (The incident is in turn a reminder that Facebook, a company that has struggled to moderate the speech of its billions of users, has outsourced much of its fake news problem to third-party fact checkers that users often don’t know about.)
“We flagged the original video as satire so it gets a warning label but no reduction in reach. Future copies we spot that omit the context will get tagged ‘false’ and will see their reach reduced,” Maarten Schenk, editor of Lead Stories and the developer of its online trend-detection software Trendolizer, told Motherboard. “And with that we wrote our first fact check that is now part of an art project I believe… In our judgement on Facebook it deserves a warning label to avoid misunderstandings but it does not need to have its distribution reduced.”
Instagram reached a conclusion that seems to conflict with, or at least differ from, how the video is handled on Facebook. A spokesperson for Instagram told Motherboard that Politifact marked a YouTube link of the deepfake as false, and that if that specific link is shared to Facebook, the platform will downrank it in users’ news feeds. On Instagram, the spokesperson said, the satire rating (referring to Lead Stories’ decision) doesn’t result in the video being filtered from Explore and hashtag pages.
“We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages,” Instagram’s spokesperson initially said of the post on Tuesday.
“Context matters and (contrary to the [Nancy] Pelosi video) the Instagram post is part of an art project with a social message, and it is open about the real story behind the video,” Schenk said.
He said that Lead Stories found one copy of the video without context on Facebook (Motherboard was able to find several more), and that it did not flag news reports about the fake video because they provided the appropriate context to make clear that the video is fake. Lead Stories left the news reports alone and flagged the art-project copy as “satire,” on the grounds that it is a “valid part of the public debate and the creators were open about what they were doing.”
The manipulated video of Pelosi that went viral on Facebook last week is an example of the tension between letting people express themselves and limiting the spread of misinformation. Deciding whether to remove the doctored Pelosi video became a no-win scenario for the platform, as Motherboard’s Caroline Haskins wrote:
By removing the Nancy Pelosi video, Facebook wouldn’t just be defining where satire becomes misinformation in that situation. The company would also be crafting a precedent that could be applied to other situations—situations where the target may not be Nancy Pelosi, but Trump, or figures scrutinized by the left or in other countries, in other contexts.
Facebook’s not alone in struggling to figure out how to moderate misinformation and satire. Earlier this week, Twitter locked out the satirical account @TheTweetofGod for “hateful conduct,” possibly because a moderator judged one of its tweets outside of its satirical context. The account has since been restored.
“All of these companies are kind of groping in the dark when it comes to what policies they need overall, because it’s a really hard problem,” Jack Clark, policy director at OpenAI, said at the House Intelligence Committee hearing. As long as Facebook continues to try to moderate its two billion users, it will face the impossible task of attempting to read two billion minds: the intention behind each post, and the way viewers perceive it.