Nancy Pelosi speaking. Image via Getty

Americans Don’t Need Deepfakes to Believe Lies About Nancy Pelosi

President Trump tweeted a video of House Speaker Nancy Pelosi that was altered to make her look and sound drunk, slowed to 75 percent of its original speed.

On Thursday, Motherboard wrote about new research out of Samsung's AI lab that makes it very easy to fake videos of people's faces, using as little as a single source image as reference material. The machine learning tech brought the Mona Lisa, Fyodor Dostoevsky, and Salvador Dalí to life using just a few portraits.

The research spurred fearful and disgusted reactions from readers and others in the machine learning community. If deepfakes (algorithmically generated face-swaps that require hundreds of training images to produce a realistic fake) are scary because they're relatively easy to create, a program that can do the same thing with only a handful of photos of its target is scarier still.


Hours after I wrote about Samsung's creation, President Trump tweeted a video of House Speaker Nancy Pelosi that was altered to make her look and sound drunk. The video was slowed to 75 percent of its original speed, according to the Washington Post, and the pitch of Pelosi's voice was modified so that she seemed to stumble through a speech. The altered video has earned 22,000 retweets and nearly 70,000 likes, and counting, and has been viewed more than 3.5 million times on Twitter.
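For a sense of just how rudimentary this kind of edit is, here is a minimal sketch of how one could produce a similar slowdown with the free tool ffmpeg, called from Python. The filenames are hypothetical and the exact tools used on the Pelosi video weren't reported; this is an illustration, not the actual method.

import subprocess

# A minimal sketch of the kind of edit described above, not the actual
# method used. "speech.mp4" is a hypothetical input file.
# setpts=PTS/0.75 stretches video timestamps, slowing playback to 75
# percent of the original speed; atempo=0.75 slows the audio to match
# while preserving pitch. Skipping that pitch correction (for example,
# by resampling with asetrate instead) would also deepen the voice.
subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-filter:v", "setpts=PTS/0.75",
    "-filter:a", "atempo=0.75",
    "slowed.mp4",
], check=True)

That's the entire trick: one stock filter chain, no machine learning anywhere.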

Technology for manipulating videos is getting better and more accessible to the masses, and that's worrying because it gives bad actors a powerful new tool for poisoning our already polarized politics. But bad actors don't need technologies like deepfakes to do this. The altered Pelosi video, which also spread widely on Facebook and YouTube, proves that rudimentary editing and a willingness to prey on people's hatred of public figures are all it takes to spread misinformation across the internet.

When Motherboard uncovered deepfakes in December 2017, the most shocking thing about them was how easy it was for anyone with a decent PC and a little coding knowledge to create them at home. Special effects that were previously attainable only for high-budget Hollywood productions were suddenly available to the masses. We collectively asked ourselves: What could this mean for our perception of photographic evidence as truth? In a world of fake news, how could society cope with believable videos of people saying things they never said?


Since then, we've seen a plethora of headlines decrying the end of truth, democracy, and civilization as we know it. Deepfakes were likened to a nuclear threat, and the government started seriously seeking countermeasures, with DARPA funding researchers to find new ways of detecting manipulated videos. It became a talking point for legislators to throw around. Republican Senator Ben Sasse tried to outlaw the creation and distribution of deepfakes with a bill last year; Senate Intelligence Committee vice chair Mark Warner wrote in a whitepaper that video manipulation technology is “poised to usher in an unprecedented wave of false or defamatory content”; Republican Senator Marco Rubio gave a speech at a Heritage Foundation event where he outlined his fears about the potential use of deepfakes in upcoming elections.

Read more: There Is No Tech Solution to Deepfakes

But over the last few years, several altered videos have spread widely on social media with no machine learning required. These videos are lightly edited to paint a picture that isn't altogether true, and they spread easily because viewers fill in the missing context with their own confirmation biases.

What we haven't seen since December 2017: any actual evidence of algorithmically manipulated videos being used to sway politics. So far, deepfakes as a destructive force in politics are just hype, and a distraction from real issues of media literacy and algorithmically assisted virality. Deepfakes have, however, caused real harm to women in the form of nonconsensual fake porn. After a journalist in India took a stand against the rape of a young girl, harassers used deepfake porn to make it look like she had appeared in a sex tape. Female public figures are often targeted because their images are easily accessible online; Scarlett Johansson, one of the early deepfake targets, recently called the fight against this kind of nonconsensual imagery "fruitless" and "a lost cause."


The Pelosi video is only the latest example of video editing and confirmation bias at work. Last year, press secretary Sarah Sanders spread a video, altered by the far-right conspiracy site Infowars, that made it look as though journalist Jim Acosta assaulted a White House aide. A conservative news outlet cut together footage of Democratic congresswoman Alexandria Ocasio-Cortez with an interview by one of its hosts to make it seem like she gave dumb answers to questions; that video got more than a million views.

At that 2018 Heritage Foundation event, Rubio mostly sounded fearful about how fake videos could impact elections. But he did hit on one point that rings truer than most of the panicked takes on deepfakes: "The vast majority of people watching [an] image on television are going to believe it."

The real threat society faces from video manipulation isn't widely accessible technology or the ability to alter someone's image in convincing ways. It's that people are gullible, and it takes hardly any editing at all to make someone believe what they want to believe.

As The Verge wrote in March about a fake report that spread on Facebook claiming the Pope had endorsed Trump: "It wasn’t damaging because it was convincing; people just wanted to believe it." If you already believe something, fake news only reinforces it.

“It is striking that such a simple manipulation can be so effective and believable to some,” Hany Farid, a digital-forensics expert and professor at the University of California, Berkeley, told the Washington Post. “While I think that deepfake technology poses a real threat, this type of low-tech fake shows that there is a larger threat of misinformation campaigns—too many of us are willing to believe the worst in people that we disagree with.”

Bad actors don't need to learn to code to make us buy what we already want to see. And that's a lot more dangerous than an eerie painting come to life.