
The era of easily faked, AI-generated photos is quickly emerging

By Dave Gershgorn

Artificial intelligence reporter


Three years ago, after an argument at a bar with some fellow artificial intelligence researchers, Ph.D. student Ian Goodfellow cobbled together a new way for AI to think about creating images. The idea was simple: one algorithm tries to generate a realistic image of an object or a scene, while another algorithm tries to decide whether that image is real or fake.

The two algorithms are adversaries—each trying to beat the other in the interest of producing the best final image—and this technique, now called “generative adversarial networks” (GANs), has quickly become a cornerstone of AI research. Goodfellow is now building a group at Google dedicated to studying their use, while Facebook, Adobe, and others are figuring out how to use the technique for themselves. Uses for data generated this way span from healthcare to fake news: machines could generate their own realistic training data so private patient records don’t need to be used, while photo-realistic video could be used to falsify a presidential address.
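The adversarial setup Goodfellow described can be sketched in a few lines of code. The toy example below is purely illustrative (it is not the model from Goodfellow's or Nvidia's papers): a hypothetical one-dimensional GAN in NumPy, where a two-parameter generator tries to mimic a target distribution and a logistic-regression discriminator tries to tell its samples apart from real ones, each updated with hand-computed gradients.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only).
# Generator: x = w*z + b maps noise z ~ N(0,1) to a sample.
# Discriminator: D(x) = sigmoid(a*x + c) scores real vs. fake.
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

w, b = 1.0, 0.0          # generator parameters
a, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)   # the "true" data distribution
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator step: maximize log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    grad_a = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator step (non-saturating loss): maximize log D(fake).
    d_fake = sigmoid(a * fake + c)
    dx = -(1 - d_fake) * a               # dLoss/dx for each fake sample
    w -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

fake_mean = float(np.mean(w * rng.normal(0.0, 1.0, 1000) + b))
print(f"generated mean is roughly {fake_mean:.2f} (real mean is 4.0)")
```

As the two players push against each other, the generator's output distribution drifts toward the real one until the discriminator can no longer tell them apart; real image GANs follow the same loop with deep convolutional networks in place of these scalar models.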

Until this month, it seemed that GAN-generated images that could fool a human viewer were years off. But research released last week by Nvidia, a maker of graphics processing units that has cornered the market on deep-learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are already being sold as replacements for fashion photographers—a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.

Every image was generated by AI.

Nvidia’s results look so realistic because the company compiled a new library of 30,000 images of celebrities, which it used to train the algorithms on what people look like. Researchers found in 2012 that the amount of data a neural network is shown is important to its accuracy—typically, the more data the better. These 30,000 images gave each algorithm enough data to not only understand what a human face looks like, but also how details like beards and jewelry make a “believable” face.

The Nvidia GANs also shine when generating bedrooms. Previous research looked like something painted by Salvador Dalí—beds melted into the floor while doorways looked twisted and warped. The Nvidia bedrooms look like something out of a catalogue.

Left and center, previous AI research. Right, images from a new Nvidia paper.

The images aren’t perfect. Some test images show women with only one earring, or a horse with a head on both sides of its body. When the system tries to generate TV monitors, it also generates cell phones and laptops. The technique also takes time—Nvidia’s paper says the networks took 20 days to train on one of its high-end GPU supercomputers.

The era of easily faked photos is quickly emerging—much as it did when Photoshop became widely prevalent—so it’s a good time to remember we shouldn’t trust everything we see.
