
AI Political Ads Are Here, and No One Knows How to Handle Them

“Fuck, I didn’t know you could do that,” one Republican admaker told VICE News.
Cameron Joseph
Washington, US
Screenshot from the Republican National Committee's recent AI-generated ad.

A Republican admaker recently met with a top Senate candidate to pitch them on hiring their firm for the upcoming election. But they think they won’t get the job—because a competitor used artificial intelligence in their presentation.

“I heard the guy who really knocked his socks off came in and presented an ad that he did using AI to generate the candidate’s voice. And the candidate thought it was so cool,” the admaker said after asking to remain anonymous to candidly discuss internal business. “I was like, ‘Fuck, I didn’t know you could do that.’”


AI-generated political ads are officially here. And no one—including the campaigns themselves, let alone the voters—is prepared to handle this new reality. Artificial intelligence is becoming more sophisticated by the day, and candidates and campaigns are just beginning to grapple with the practical and ethical concerns of AI-produced content creeping into politics. The time when deepfake videos that can’t easily be distinguished from real events play a major role in campaigns seems not only inevitable but closer than ever.

This week marked a watershed moment.

On Tuesday, the Republican National Committee (RNC) debuted an ad composed entirely of AI-generated video—the first such campaign video from a major party in history.

The RNC ad depicts a dystopian hypothetical future where President Biden is reelected, banks collapse, China invades Taiwan, and San Francisco is cordoned off by the military after being overrun by immigrants, gangs and drugs. 

It’s more notable for being the first of its kind than for the quality of the ad itself.

The images at first blush look like they could be real, but on a second watch they’re clearly AI-generated. President Biden and Vice President Kamala Harris clearly have that toothy, wolfish look common in AI-generated art, and China’s hypothetical attack on Taiwan almost looks like a puppet set from “Team America: World Police.”


Most of the ad’s footage could just as easily have been replaced by b-roll video from actual events. And to its credit, the RNC explicitly released the ad as an AI-generated video and labeled it as such on the video itself.

An RNC source familiar with the ad’s production told VICE News that while the video clips were AI-generated, the ad’s script was written by actual humans. When asked if the RNC would promise not to use deepfakes of politicians, the source said, “We would never produce content that was meant to deceive viewers.”

But the video’s release, even if it’s a gimmick, marks the beginning of a new era where it could become even harder for voters to discern truth from lies.

The biggest worry isn’t politicians or official campaigns airing ads with fake content but anonymous, online randos with no accountability or moral scruples pushing out viral deepfake videos.

“The concern is when we get to the point where it can be done down at the grassroots level. There are tools that are out there where they could generate this stuff en masse in an automated way,” said Dave Doermann, the director of the University at Buffalo Artificial Intelligence Institute. “We're not going to be able to detect it in real time fast enough that it makes any difference, [and] even if we could, the social media sites aren't going to be the ones that are putting the effort into taking it down.”


In late February, far-right commentator Jack Posobiec, a close ally of former President Donald Trump who played a key role in spreading the false Pizzagate conspiracy theory that was a forerunner of QAnon, posted a deepfake video showing President Biden declaring a military draft to fight Russia in Ukraine, billing it as “a sneak preview of things to come.” The event was obviously a hypothetical (and unrealistic) future, but the video was good enough to set off alarm bells.

His response when fact-checkers, mainstream media outlets and some fellow conservatives asked why he’d released the video and why he didn’t clearly label it an AI-generated deepfake: “Screw all of them.”

And while publicly available AI technology isn’t yet able to produce completely believable video deepfakes, it’s already sophisticated enough to produce realistic audio clips.

In the final days of Chicago’s fraught 2023 mayoral election, an anonymous Twitter account misleadingly styled like a news outlet, “Chicago Lakefront News,” posted AI deepfake audio of candidate Paul Vallas saying, “Back in my day, cops would kill 17 or 18 people and nobody would blink an eye.” The clip was played thousands of times before it was taken down—and people sniffed out that it was a fake so quickly only because the comment was a little too wild to be believed. A better deepfake script could have sailed by unnoticed for longer.


AI-generated video ads are new in politics, but deepfake technology has also been used in attempted propaganda efforts abroad in recent months. Last year, an AI-generated video was released that showed Ukraine’s president calling for soldiers to lay down their arms and surrender to Russia. China has similarly used deepfake technology to create videos that feature Western news anchors pushing pro-China propaganda. Russian online propagandists have targeted U.S. and European elections for years, likely swaying the 2016 presidential election. Deepfake videos will likely become a propaganda tool for foreign actors as well.

Selectively edited video clips are already commonplace in political ads and are often called out if they’re egregious enough. Campaigns and committees that have gotten caught altering images in ads have been widely mocked. When Photoshop became a thing a decade ago, unscrupulous campaigns tried to use it for misleading imagery—until some examples backfired spectacularly. One GOP ad, for instance, photoshopped Democratic Sen. Jon Tester’s head onto a different body, forgetting that he is missing three fingers on one hand from a childhood farm accident. It still happens, but much more rarely.

There’s another downside: Once people grow accustomed to fake AI-generated videos, they’ll become even more hardened, cynical, and harder to reach and convince—for politicians, admakers, journalists, everyone. Partisans already reject facts that challenge their worldview, and the cries of “fake news” will only get louder as this technology becomes more prevalent.


"The biggest hurdle to effective political advertising is credibility. With AI technology becoming more common in our lives, voter skepticism will only continue to grow.  This increases the burden on political media firms to find additional ways to make the ads credible,” said Democratic admaker Jon Vogel.

And it will make people less trusting of gaffes as well—the next time something like Trump’s “grab them by the pussy” video, Hillary Clinton’s “basket of deplorables” remark or Mitt Romney’s “47 percent” comment dribbles out from a private event, a segment of the population will simply dismiss it as a deepfake.

Campaign strategists in both parties told VICE News that they doubt that major candidates and the party committees will be willing to risk including fake, AI-generated videos in major TV campaign ads. The chance of being called out for lying and losing trust with voters simply isn’t worth it. They think that AI technology will be most often used for more mundane tasks like writing press releases and simple campaign ad scripts, which are often cookie-cutter enough that ChatGPT may be able to spit them out faster than inexperienced low-level staffers.

Strategists will need to walk a fine line between saving time and accidentally adding misleading fake imagery when producing campaign ads.


The GOP admaker said that just days ago they flagged to colleagues that one of the images in an ad they were producing was out of focus.

“The editor I’m working with told me, ‘Don’t worry, I’ll just generate something with AI,’” they said.

But harder-to-track online ads and direct text campaigns are already more susceptible to false information and could be ripe targets for deepfakes.

And the biggest risk from the campaigns themselves is in the closing days of a race. That’s when ratfucking and campaign sabotage most often occur, because campaigns know that voters may not hear about their dirty tricks until after the election.

“One could see an ad in the last five days or something that's extremely egregious,” warned the GOP admaker.
