
There’s now a computer program that lets you control someone else’s face

YouTube/Matthias Niessner
Look at their faces, then look at the screens.

In the prophetic 1997 film “Face/Off,” John Travolta figures that the only way to avoid going back to prison is to surgically replace his face with Nicolas Cage’s. Medical science hasn’t quite reached the point where that premise stops being laughable, but a new technology out of Stanford University is getting us closer.

Researchers have figured out how to make one person’s face mimic the facial expressions of another, in real-time video. The method, announced in a paper (pdf) that will appear in a special edition of the scientific journal ACM Transactions on Graphics later this year, uses a regular computer, special cameras, and some seemingly magical new software.

The research team comprises computer scientists from Stanford, the Max Planck Institute, and the University of Erlangen-Nuremberg in Germany.

Their system requires a bit of setup: a pair of cameras first calibrates to each new face and renders it in digital 3D. The program then tracks both subjects’ facial expressions using cameras that sense depth, texture, and face shape and location, and maps the movements of prominent features (the nose, mouth, and eyes) from one person’s face onto the digital avatar of the other. The end result: you can appear to control a friend’s face with your own.

(To make up for the fact that not everyone’s mouth is the same size, the program fills in any gaps with an eerily perfect set of fake teeth.)
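The paper’s actual method fits a parametric 3D face model and solves a GPU-based optimization in real time, but the core idea of the transfer step can be sketched in a few lines. In blendshape-style face models, a mesh is a neutral face plus identity and expression components; reenactment keeps the target’s identity coefficients and swaps in the source actor’s expression coefficients. The sketch below is a toy illustration with made-up random basis data, not the researchers’ implementation; all names and dimensions are hypothetical.

```python
import numpy as np

# Toy blendshape-style face model (all data here is random, for illustration).
# vertices = neutral + id_basis @ alpha + expr_basis @ beta
rng = np.random.default_rng(0)
N_VERTS = 100           # toy mesh size (real models use thousands of vertices)
N_ID, N_EXPR = 5, 4     # number of identity / expression basis vectors

neutral = rng.normal(size=(N_VERTS, 3))
id_basis = rng.normal(size=(N_VERTS, 3, N_ID))
expr_basis = rng.normal(size=(N_VERTS, 3, N_EXPR))

def render_face(alpha, beta):
    """Reconstruct mesh vertices from identity (alpha) and expression (beta) coefficients."""
    return neutral + id_basis @ alpha + expr_basis @ beta

def transfer_expression(source_beta, target_alpha):
    """Reenact the target identity with the source actor's tracked expression."""
    return render_face(target_alpha, source_beta)

# Source actor makes an expression; target is a different identity.
source_beta = np.array([1.0, 0.0, 0.5, 0.0])          # tracked from the source
target_alpha = np.array([0.2, -0.1, 0.0, 0.3, 0.1])   # fitted to the target

reenacted = transfer_expression(source_beta, target_alpha)
print(reenacted.shape)  # one (x, y, z) position per mesh vertex
```

In the real system, the expression coefficients are re-estimated from the depth cameras every frame and the reenacted mesh is composited photo-realistically back into the target video; this sketch only shows the coefficient swap at the heart of the transfer.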

Real-time Expression Transfer for Facial Reenactment/Thies, Zollhöfer, Nießner et al.
It’s like Photoshop, but for video.

Matthias Niessner, one of the researchers on the project, told Quartz that the team’s main motivation was to create something that could aid multi-language videoconferences like Skype.

YouTube/Matthias Niessner
David Bowie might be interested in seeing this.

In the future, interpreters could translate a speaker in real time, and the end user would simply see the person on screen appear to speak to them in their own language.

In their research paper, the team said they believe that this technology could pave the way to having photo-realistic avatars in virtual reality settings.

Niessner added that the team was also interested in applying this technology to movies, dubbing them for foreign audiences. “Most important though: It’s a crap ton of fun playing around with the system,” Niessner said.
