Attendees at SIGGRAPH 2017 were treated to a look at the future of real-time digital human rendering. SIGGRAPH’s VR Village hosted an experience featuring interviews conducted by a digital avatar “driven” in real time (yes, like in Ready Player One) by the human Mike Seymour, who wore a special rig that captured his motions and expressions.

A staggering nine high-end graphics computers ran synchronized instances of Unreal Engine to render Mike’s digital avatar in stunning high definition.

The sessions were conducted with industry leaders from Pixar, Disney Research, Epic Games, and Fox Studios, and were rendered inside a virtual studio. Participants with VR headsets could even watch the interviews from inside that virtual studio; everyone else watched the real-time rendering unfold on a standard flat screen.

“Each day of the trade show, digital Mike is meeting digital versions of industry legends and leading researchers from around the world. Together they conduct interviews in ‘Sydney’ via virtual reality, which can be watched either in VR or on a giant screen. The project is a key part of a new project into virtual humans as Actors, Agents and Avatars,” Seymour wrote on the VFX news site fxguide.com.

In the photos above, the “real” Mike is wearing a headgear rig with several video and motion-capture cameras pointed at his face; these capture his facial expressions and movements and feed them to a virtual simulation that contains, essentially, an avatar of himself. Below are the renderings of digital Mike.

A lot of different moving parts make this work, from the Technoprops headgear Mike is wearing to the digital facial model of himself (the “avatar”) that was made as part of the Wikihuman project at USC ICT. Disney Research Zurich produced the eye-scanning data that enabled the live, real-time rendered digital face.
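To make that capture-to-avatar data flow concrete, here is a minimal, purely illustrative Python sketch of how per-frame expression data from a head-mounted rig might be forwarded to an avatar at VR frame rates. The class and function names (CaptureFrame, read_rig_frame, Avatar) are hypothetical and are not part of the actual MEETMIKE pipeline.

```python
from __future__ import annotations

import time
from dataclasses import dataclass


@dataclass
class CaptureFrame:
    """One solved frame of facial capture data (illustrative fields only)."""
    timestamp: float
    expression_weights: dict[str, float]       # e.g. {"jaw_open": 0.4, "brow_raise": 0.1}
    head_rotation: tuple[float, float, float]  # yaw, pitch, roll in degrees


def read_rig_frame() -> CaptureFrame:
    """Stand-in for the head-mounted camera solve; returns a neutral pose here."""
    return CaptureFrame(time.time(), {"jaw_open": 0.0, "brow_raise": 0.0}, (0.0, 0.0, 0.0))


class Avatar:
    """Stand-in for the rendered digital Mike."""
    def apply(self, frame: CaptureFrame) -> None:
        # In the real system these values would drive the facial rig every frame.
        print(f"{frame.timestamp:.3f}: {frame.expression_weights}")


def run(avatar: Avatar, seconds: float = 0.1, hz: float = 90.0) -> None:
    """Poll the rig and forward each solved frame at roughly the VR frame rate."""
    end = time.time() + seconds
    while time.time() < end:
        avatar.apply(read_rig_frame())
        time.sleep(1.0 / hz)


if __name__ == "__main__":
    run(Avatar())
```

In the real system, solving expressions from the stereo IR cameras and retargeting them onto the facial rig are far more involved; this only illustrates the per-frame loop.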

The facial rig itself was developed by 3Lateral, one of the leading 3D character and creature design studios and the artists behind the facial rigging for titles like Ryse: Son of Rome, Ashes to Ashes, and Grand Theft Auto V. (You can learn more about facial rigging here.) The simulation itself is rendered at 90 FPS in VR using Epic’s Unreal Engine in concert with custom HTC Vive hardware; in other words, it runs like any other VR experience, with the exception of the nine-machine setup. Mike’s avatar was scanned and rigged in advance, whereas his guests’ avatars were generated from a single still photograph using AI algorithms.
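For a sense of how tight that rendering target is, here is a quick back-of-the-envelope calculation using the figures quoted in this article. It is illustrative arithmetic only, not a profile of the actual system.

```python
# Back-of-the-envelope frame budget for 90 FPS VR, using figures quoted in this article.
TARGET_FPS = 90
frame_budget_ms = 1000.0 / TARGET_FPS            # ~11.1 ms available per stereo frame
quoted_render_ms = 9.0                           # "about every 9 milliseconds" (see facts below)
triangles = 440_000                              # triangle count quoted below
triangles_per_second = triangles * TARGET_FPS    # ~39.6 million triangles per second
print(f"budget {frame_budget_ms:.1f} ms, quoted render ~{quoted_render_ms:.0f} ms, "
      f"~{triangles_per_second / 1e6:.1f} M triangles/s")
```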

The MEETMIKE experience received positive reviews from SIGGRAPH 2017 audiences.

“We are proud to be able to help bring this mix of Virtual Production, Digital Humans and Virtual Reality to SIGGRAPH,” commented Mike Seymour. “The aim is to research responses to digital humans, which will someday enable virtual assistants to be deployed in contexts ranging from health care, aged care and education to, of course, entertainment.”

Technical facts:

  • MEETMIKE comprises about 440,000 triangles rendered in real time, which means a stereo VR frame is rendered about every 9 milliseconds; roughly 75% of those triangles are used for the hair.
  • Mike’s face rig uses about 80 joints, mostly for the movement of the hair and facial hair.
  • For the face mesh, only about 10 joints are used; these drive the jaw, eyes and tongue, in order to add more arc to their motion.
  • These joints work in combination with around 750 blendshapes in the final version of the head (see the sketch after this list).
  • The system uses complex traditional software design and three deep learning AI engines.
  • Mike’s face is captured with a state-of-the-art Technoprops stereo head rig with IR computer-vision cameras.
  • The university’s research studies into acceptance are intended for publication at future ACM conferences.
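As a rough illustration of how those blendshapes combine, here is a minimal sketch of the standard linear morph-target model (a neutral mesh plus weighted per-vertex deltas). The shape names and the tiny mesh are made up for the example; the real rig combines roughly 750 shapes with the joints listed above.

```python
from __future__ import annotations

import numpy as np


def evaluate_blendshapes(neutral: np.ndarray,
                         deltas: dict[str, np.ndarray],
                         weights: dict[str, float]) -> np.ndarray:
    """Deformed vertices = neutral + sum_i weight_i * delta_i (standard linear model)."""
    deformed = neutral.copy()
    for name, weight in weights.items():
        if name in deltas and weight != 0.0:
            deformed = deformed + weight * deltas[name]
    return deformed


# Toy example: a 4-vertex "mesh" with two illustrative shapes.
neutral = np.zeros((4, 3))
deltas = {
    "jaw_open":   np.tile([0.0, -0.02, 0.0], (4, 1)),
    "smile_left": np.tile([0.01, 0.01, 0.0], (4, 1)),
}
print(evaluate_blendshapes(neutral, deltas, {"jaw_open": 0.4, "smile_left": 0.25}))
```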

Read MEETMIKE’s submission to SIGGRAPH, with full text and abstract, here.