This Company Promises to Place Any Face Onto Any Body, Using an Algorithm

Screenshot via Rosebud.ai

In September, we saw the launch of Generated Photos, a collection of 100,000 AI-generated faces for use in stock images. Another company, Rosebud AI, is now taking that concept a step further, with faces that aren’t just part of a static stock database, but customizable and personalizable. Users will be able to algorithmically place any face onto any body in its collection.

Maybe you’re thinking, another AI face generator? Yes, another AI face generator. But this time, you’ll be able to upload any face into a system that places it onto another person’s stock-image body.

Rosebud AI, a San Francisco-based synthetic media company, launched Generative.Photos this week with a Product Hunt page and demo site. The demo only uses its pre-loaded models for now, but includes placeholders for uploading your own photos and a signup for a user waitlist.

“Generative.photos is a first step in our synthetic stock photo and API offering, which will eventually allow users to edit and fully synthesize visual content with an intuitive interface,” Lisha Li, the founder of Rosebud AI, wrote on Product Hunt. “We focused on bringing forth a way to diversify stock photo content since it was a need we heard voiced by stock photo users. All the faces in our 25k photo collection are not of real people.”

If this diversity line sounds familiar, that’s because it’s also what Generated Photos claimed it was setting out to fix. Li also says the company wants to give “consumers the power to choose an advertising model that they can relate to,” with more diverse models. She wrote that what makes Generative.Photos different from other attempts is the context: It gives a fictitious, generated face a stock body and background, and adjusts it to whatever skin color or gender an advertiser or marketer wants.

Li told Motherboard that Rosebud AI’s tools are still in closed beta. But releasing something into the world before establishing public terms of use, or before considering any guidelines or prevention measures against the tool’s potential for malicious use, is unfortunately not uncommon. We see it again and again with AI programs rushed out into the wild before any ethical guidelines are established, as with deepfakes, DeepNude, and Generated Photos.

In addition, Generated Photos and Rosebud AI are allowing people to create their own realities, letting companies demonstrate artificial diversity where there actually isn’t any. Rather than real diversity, we get algorithmically generated, customizable stock images.

“It’s pretty harmful and a major oversight to launch any kind of project where users can add content to a repository and not check and verify if that content is ‘harmful’ or not,” machine learning designer Caroline Sinders, a fellow with Mozilla Foundation and the Harvard Kennedy School who studies biases in AI systems, told Motherboard. “It’s even more of an oversight and downright neglectful not to have policies that define ‘harm’ in terms of content and actions. In 2019, this is a major issue for a company to not have these things.”

Update: Following publication, Li told Motherboard that Rosebud AI’s self-serve tool is not open yet, as it is still in closed beta, and that users will be required to sign a terms of service reflecting a code of ethics before using the beta version of the tools.