
Hackers Fool Facial Recognition Into Thinking I’m Mark Zuckerberg

Image: A modified picture of the author wearing a blue sweater, with search results yielding images of Mark Zuckerberg.

An Israeli artificial intelligence company says it has developed a new technique that tricks facial recognition systems by adding noise to photos.

Adversa AI’s technique, announced this week, is designed to fool facial recognition algorithms into identifying a picture of one person’s face as that of someone else by adding minute alterations, or noise, to the original image. The noise tricks the algorithms but is subtle enough that the altered image looks normal to the naked eye.
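Adversa AI has not published how its noise is generated, but the general idea of an adversarial perturbation can be sketched with a standard textbook technique such as the fast gradient sign method (FGSM). In the sketch below, the off-the-shelf image classifier, the input photo, and the target label are illustrative stand-ins, not the company’s actual system.

```python
# A minimal FGSM-style sketch of an adversarial perturbation.
# This is NOT Adversa AI's unpublished method; the model, "face.jpg",
# and the target class index are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # input normalization omitted for brevity
])

image = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

target = torch.tensor([543])  # hypothetical "target identity" class index

# Targeted attack: nudge the image toward the target class by stepping
# against the gradient of the loss with respect to the pixels.
loss = F.cross_entropy(model(image), target)
loss.backward()

epsilon = 2 / 255  # perturbation budget per pixel
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

print(model(adversarial).argmax())  # ideally now predicts the target class
```

The epsilon value caps how far any pixel can move, which is what keeps the altered photo effectively indistinguishable from the original to a human viewer.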


The company announced the technique on its website with a demonstration video showing that it could alter an image of CEO Alex Polyakov so that PimEyes, a publicly available facial recognition search engine, misidentified his face as that of Elon Musk.

To test this, I sent a photo of myself to the researchers, who ran it through their system and sent it back to me. I uploaded it to PimEyes, and now PimEyes thinks I’m Mark Zuckerberg.

Adversarial attacks against facial recognition systems have been improving for years, as have the defenses against them. But there are several factors that distinguish Adversa AI’s attack, which the company has nicknamed Adversarial Octopus because it is “adaptable,” “stealthy,” and “precise.”

Other methods are “just hiding you, they’re not changing you to somebody else,” Polyakov told Motherboard.

And rather than adding noise to the image data on which models are trained in order to subvert that training—known as a poisoning attack—this technique involves altering the image that will be input into the facial recognition system and doesn’t require inside knowledge of how that system was trained. 
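In practice, needing no inside knowledge means the attacker can treat the recognizer as a service to query. A generic score-based approach, again not Adversa AI’s unpublished method, simply proposes small random tweaks and keeps the ones that raise the match score for the target identity; the similarity() function below is a hypothetical stand-in for whatever score a real face-recognition API returns.

```python
# A minimal sketch of a black-box evasion attack by random search.
# The attacker only sees a match score, never the model's weights or
# training data. similarity() is a hypothetical placeholder.
import numpy as np

def similarity(image: np.ndarray) -> float:
    """Placeholder: imagine this queries a face-recognition service and
    returns how strongly the image matches the target identity."""
    raise NotImplementedError

def black_box_attack(image, epsilon=8 / 255, steps=1000,
                     rng=np.random.default_rng(0)):
    """image: float array of pixel values in [0, 1]."""
    perturbation = np.zeros_like(image)
    best = similarity(np.clip(image + perturbation, 0, 1))
    for _ in range(steps):
        # Propose a small random change; keep it only if the recognizer's
        # match score for the target identity improves.
        candidate = np.clip(
            perturbation + rng.choice([-1, 1], size=image.shape) * (1 / 255),
            -epsilon, epsilon,
        )
        score = similarity(np.clip(image + candidate, 0, 1))
        if score > best:
            best, perturbation = score, candidate
    return np.clip(image + perturbation, 0, 1)
```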

Adversarial Octopus is a “black box,” Polyakov said, meaning even its creators don’t understand the exact logic behind how the neural networks that alter the images achieve their goal.

Adversa AI has not yet published peer-reviewed research explaining Adversarial Octopus. Polyakov said the company plans to publish after it completes the responsible disclosure process: informing facial recognition companies about the vulnerability and how to defend against it.

It’s not the first time researchers have created methods for subverting computer vision systems. Last year, researchers at the University of Chicago released Fawkes, a publicly available privacy tool designed to defeat facial recognition. Shawn Shan, a PhD student and co-creator of Fawkes, told Motherboard that, based on the information Adversa AI has made public, its technique seems feasible for defeating publicly available recognition systems. State-of-the-art systems may prove harder, he said. 

The field is constantly evolving as privacy-minded researchers and purveyors of facial recognition compete in a cat-and-mouse game to find and fix exploits.

The Adversarial Octopus technique could theoretically be put to nefarious uses, such as fooling an online identity verification system that relies on facial recognition in order to commit fraud. It could also be used by “hacktivists” to preserve some of their privacy while still maintaining social media profiles, Polyakov said.

But despite the rapid advances in adversarial attacks, the threats remain largely theoretical at this point.

“We’ve never seen any attack, basically” that has deployed advanced techniques like Adversarial Octopus or Fawkes to commit fraud or other crimes, Shan said. “There’s not too much incentive for now. There are other easier ways to avoid online ID verification.”