
For the Love of God, Not Everything Is a Deepfake

Joe Biden altered image.

Three years after deepfakes came into the world, we’ve yet to see the algorithmic face-swapping technology put to any convincing use in US politics. When Motherboard uncovered people generating fake, face-swapped videos to make non-consensual pornography in 2017, experts worried that someone could make a video of a world leader that would set off a nuclear war or topple democracy.

That hasn’t happened. The most common use for deepfakes continues to be to take ownership of women’s bodies in non-consensual porn; in some cases, the tech is being used in “funny” advertisements. By and large, people are pretty good at recognizing deepfakes, in part because of incremental reporting by journalists and artificial intelligence researchers working hard to make sure people are informed about what they see online, and in part because many deepfakes simply aren’t very convincing.


But political commentators are still so eager to find an example of deepfakes being used to sow misinformation that they’ll call any edited video of a politician a deepfake. Usually, the intention of the video doesn’t matter; to them, the fact that a fake video exists at all is enough to suggest that we’re facing some sort of disinformation crisis.

Late Sunday night, the Twitter account @SilERabbit posted a gif of Joe Biden seemingly sticking his tongue out and wiggling his eyebrows. “You can tell it’s a deep fake because Jill Biden isn’t covering for him,” the tweet said.

The official account of President Donald Trump retweeted that tweet, and the Atlantic’s David Frum was on the case. “It’s an important milestone that the first deployment of Deep Fake technology for electioneering purposes in a major Western democracy is by … the president of the United States,” Frum tweeted.

This isn’t a deepfake. Deepfakes don’t stretch and drag facial features like that. As the watermark shows, this video was made using MugLife, which animates images according to how you distort them with a finger—kind of like stretching and pulling an image in Photoshop, but rendered in gif form.

Frum didn’t just tweet about the video, he went and turned it into a blog post for the Atlantic, warning about the dangerous precedent set by Trump sharing a video that has been altered to show his competitor… with a cartoonish tongue.

“The late April retweet is another step, and a big one. Instead of sharing deceptively edited video—as Trump and his allies have often done before—on April 26 Trump for the first time shared a video that had been outrightly fabricated,” Frum wrote. “If it works, it may happen again—and in more cunning forms, closer to voting day.”

What’s left unexplored and unexplained here is what this is a “step” toward, and what it would mean for this video to “work.” Trump is a horrible tweeter, but this is no different from countless other tweets Trump has spread, and it’s unclear what sort of message a gif of Biden with cartoon eyes and a stuck-out tongue is going to convey.

Ironically, the one who is unable to distinguish the truth here is Frum, who is calling an image manipulated on a silly app a “deepfake.” (He also misspells “deepfake” throughout as two words.) It’s worth being pedantic and differentiating between deepfakes and other manipulated media. Malicious deepfakes are made with the intent to deceive—a distinction many mainstream social media platforms now make in their terms of use—and deepfakes are made using algorithms and machine learning, not puppeteered by a finger on an iPhone.

By calling this non-deepfake gif a serious threat to democracy—a line used by panic-mongering politicians, talking heads, and opportunistic startups for years—he’s able to avoid addressing the real, much more complicated issues surrounding deepfakes: consent, bodily autonomy, and bias reinforcement online.

In the piece, Frum also mused on why the image is allowed to stay up on Twitter, despite the platform’s policy against malicious content manipulation. It must be because the original tweet labels it as a deepfake, he reasoned. But it’s not allowed to stay up simply because the original poster labeled it. Twitter’s policy covers a wide range of manipulated media, including selective editing, cropping, changing the video speed, overdubbing, and manipulation of subtitles.

It’s up because the MugLife gif of Biden has none of the qualities that Twitter would consider a breach of its terms of service, and clearly isn’t intended to “deceive” anyone: “You may not deceptively share synthetic or manipulated media that are likely to cause harm,” according to Twitter’s rules. It’s clear it’s fake, and it’s clear it’s satire, so it stays up.

Again, this is not a deepfake. It’s a video editing job that a six-year-old with an iPhone could do. But the Joe Biden manipulated image does have something in common with other manipulated videos that aren’t deepfakes, like the slowed-down video of Nancy Pelosi last year. It projects whatever you want it to project. Like most political satire, and most of Twitter these days, what you see reflects and reinforces what you already believe. If you see the Biden tongue-out gif and like the guy, it’s a juvenile joke by his detractors. If you see it and believe the women who say Biden is a sexual assailant, it’s a gross rape joke at best. If you, like Trump, see him as an opponent, it’s a silly jab.

Political satire has existed for hundreds of years. It’s basically a foundation of American democracy itself. Gifs like this won’t be democracy’s undoing—we have bigger problems than that.