Tech

Sorry Supermodels, Runway Avatars Never Have to Eat

Supermodels are the latest addition to the list of professions threatened with replacement by technology; specifically, motion capture and 3D animation that, merged together, can create lifelike digital avatars that mimic natural human movement in real time.

And, unlike high-maintenance human supermodels, who have to eat and sleep, these next-gen animated avatars “never tire or get hungry,” said researchers from Manchester Metropolitan University in the UK, where the software is being developed.


I’m not sure if this is a promising or horrifying thing. Sure, it could relieve young women from the burden of maintaining a size 0 at five-foot-ten without passing out on the catwalk. But creating fashion icons that aren’t even limited by natural human constraints seems like a dangerous step for an industry already so far removed from reality.

Anyway, that’s all just speculation for now; what the researchers are actually trying to do is reconstruct natural body movement as closely as possible in the digital realm, to “produce and animate a realistic virtual human,” says the study, which was recently published in the International Journal of Fashion Design, Technology and Education.

It’s basically a three-step process. One, Hollywood-style motion capture rigs and body scanners record data on the model’s measurements and movements. (Body-scanning is already commonly used in the fashion industry for to-the-inch personalized clothing design.)

Images taken from the 3D body scanner. Image: MMU

Two, infrared depth sensing technology, which works much the same way as a Microsoft Kinect, creates a 3D virtual avatar of the model, down to a UV map of the skin on the face and a skeletal map that places the body’s bones and joints in their correct positions.

An avatar from the 3D body scan is merged with the motion capture ‘skeleton.’ Image: MMU

Three, the avatar is animated (and here’s the kicker) based on the previously recorded motion data and that skeletal map. In other words, the avatar’s animated movements are authentic to that person’s body structure, making for more natural and realistic motion.
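The core idea of that third step is forward kinematics: recorded joint rotations are replayed down a skeleton whose bone lengths come from that specific person’s scan. Here’s a minimal 2D sketch in Python of how that might work; the joint names, offsets, and angles are all invented for illustration (a real rig is 3D, with far more joints and full rotation data).

```python
import math

# Hypothetical 2D skeleton: each joint has a parent and a fixed bone offset
# measured from that parent, standing in for the per-person "skeletal map"
# built from the body scan. All names and numbers here are illustrative.
BONES = {
    "hip":      (None,       (0.0, 0.0)),
    "spine":    ("hip",      (0.0, 0.5)),
    "shoulder": ("spine",    (0.2, 0.3)),
    "elbow":    ("shoulder", (0.0, -0.3)),
}

def pose(rotations):
    """Forward kinematics: replay one frame of recorded joint rotations
    (in radians) down the chain and return each joint's world position."""
    world = {}  # joint -> ((x, y), accumulated angle)
    for name, (parent, (ox, oy)) in BONES.items():  # parents listed first
        rot = rotations.get(name, 0.0)
        if parent is None:
            world[name] = ((0.0, 0.0), rot)
            continue
        (px, py), pang = world[parent]
        ang = pang + rot  # a child joint inherits every rotation above it
        # rotate the bone offset by the accumulated angle, then translate
        x = px + ox * math.cos(ang) - oy * math.sin(ang)
        y = py + ox * math.sin(ang) + oy * math.cos(ang)
        world[name] = ((x, y), ang)
    return {n: p for n, (p, _) in world.items()}

# One captured "frame": rotate the shoulder 90 degrees to raise the arm.
frame = {"shoulder": math.pi / 2}
arm_tip = pose(frame)["elbow"]  # roughly (0.0, 0.7)
```

Because the bone offsets come from the scan, replaying the same recorded rotations reproduces that particular person’s proportions, which is what makes the motion look like them rather than a generic rig.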

These “near-faultless copies of people” can then be “digitally dressed in the latest haute couture and their movements recorded, ready to instantly appear in Milan or Paris,” explained the university news release.

The next step for researchers is to make the process nearly instantaneous, to produce avatars ready for real-time performance. That means you could project a virtual outfit onto a human model in real time, or alternatively, use digital avatars instead of humans, enabling live performances on mixed reality catwalks around the world.

Augmented reality is already used to project holograms of famous models on the runway, but these digital copies of humans could potentially replace models altogether, researchers say. I’m still skeptical as to just how lifelike the avatars really are, though; unfortunately there’s no video of the project to judge by, only the below image, described in the study as “the results of the motion data driven avatar.”

The technology was originally developed to map the movements of ballet dancers for coaching and to augment performance. Image: MMU

That said, motion capture tech has been used to make virtual runway models before, albeit not in real time and nowhere near the same level of detail. For instance, back in 2009, Vista Animations, an animation override company that lets users personalize the movement of their Second Life avatar, introduced a “fashion runway” feature alongside custom movements like “hand swords” or “DJ animations.”

Though a somewhat different beast (it’s totally immersive VR, rather than animated avatars based on real-time body scan data), the video gives a glimpse into the frightening future of high fashion in the digital metaverse.