The hottest trend in AI is perfect for creating fake media

Real talk on fake news.
Image: AP Photo/Brynn Anderson

Artificial intelligence researchers have a new best friend: the “generative adversarial network.” But the flip side of this technology, which can help us enhance images and train medical algorithms, is that GANs will make hoaxes, doctored video, and forged voice clips easier to execute than ever before. 

At a basic level, a GAN is two neural networks trying to trick each other. But to understand why this technology is so applicable to “fake news,” or faked media of any kind, you have to know how it works. Let’s break it down in reverse order, which is counter-intuitive but will make more sense in the long run.

“Networks”

Two neural networks pitted against each other—what does that mean?

Neural networks take data and break it into tiny pieces, then calculate the relationships between those pieces to understand the data. That might be confusing, but it’s basically the idea that allows a machine to look at two pictures of dogs and discern that they are different individual animals, but both dogs. Think of it like memorizing the mathematical formula for the idea of a dog: four protruding structures for legs, plus triangular ears, plus a shape like a snout, plus a tail, plus fur. Equals a dog.

One neural network takes these formulas and applies them to generate what it thinks a “dog” looks like, based on all the dogs its creator has shown it in the past. The second network has also been trained on real dogs; it judges whether what the first network came up with is a real dog or not. That second network doesn’t know that a fake dog is fake, or that the whole system is built to create fake-dog images. It’s just determining whether what it sees is a real dog, by its understanding, or not.
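For readers who want to see what that looks like in code, here’s a rough sketch of the two networks using PyTorch. Everything about it is illustrative: the layer sizes, the 28-by-28 image size, and the noise dimension are arbitrary stand-ins, not the setup from any particular paper.

```python
# A minimal sketch of the two networks, with made-up sizes.
# The generator turns random noise into a flat 28x28 "image"; the
# discriminator takes an image and outputs a probability that it's real.
import torch
import torch.nn as nn

NOISE_DIM = 64        # size of the random input to the generator (arbitrary)
IMAGE_DIM = 28 * 28   # flattened image size (small grayscale pictures)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMAGE_DIM),
    nn.Tanh(),          # pixel values squashed into [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),       # probability that the input image is real
)

# The generator dreams up an image from noise; the discriminator scores it.
noise = torch.randn(1, NOISE_DIM)
fake_image = generator(noise)
print(discriminator(fake_image))  # close to 1 means "looks real," close to 0 means "fake"
```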

“Adversarial”

This is the key word to the whole system: A GAN is good at generating convincing media because it has quality control built in.

Those two neural networks? The first is a generator, the second a discriminator. You could also call the generator the actor and the discriminator the critic. The actor tries to make something that looks real, and the critic determines whether it’s good enough to pass as real. So they’re adversaries of sorts, though the actor can learn from the critic, and get better and better over time.
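In the same rough-sketch spirit, here’s roughly what one round of that tug-of-war could look like, continuing the toy networks above. The batch of “real” images is just random placeholder data and the learning rates are untuned guesses; the point is the back-and-forth, where the critic learns to separate real from fake and then the actor learns to fool the critic.

```python
# One round of the actor/critic tug-of-war, with illustrative sizes and settings.
import torch
import torch.nn as nn

NOISE_DIM, IMAGE_DIM, BATCH = 64, 28 * 28, 32
generator = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                          nn.Linear(256, IMAGE_DIM), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()  # binary cross-entropy: "real or fake?"
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(BATCH, IMAGE_DIM) * 2 - 1   # placeholder for genuine training images
real_labels = torch.ones(BATCH, 1)
fake_labels = torch.zeros(BATCH, 1)

# 1) Train the critic: reward it for calling real images real and fakes fake.
fake_images = generator(torch.randn(BATCH, NOISE_DIM)).detach()
d_loss = (loss_fn(discriminator(real_images), real_labels) +
          loss_fn(discriminator(fake_images), fake_labels))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# 2) Train the actor: reward it for fakes the critic mistakes for real.
fake_images = generator(torch.randn(BATCH, NOISE_DIM))
g_loss = loss_fn(discriminator(fake_images), real_labels)
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

Repeat that loop enough times over real training data and, in principle, the generator’s output gets harder and harder for the critic to flag.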

“Generative”

Typically, the goal of GANs is to generate something. It could be an audio clip of a voice, or a video, or still images. The first paper describing GANs came out in 2014, and mainly focused on images. Another early paper showcasing the technology’s potential came from a 2015 project whose GAN could generate all kinds of different images of bedrooms.

Today, researchers can make photorealistic images of fake celebrities by training the networks on low-resolution pictures of celebrities on red carpets, and then slowly training on higher and higher quality images.

Nvidia’s GAN research.
Image: Nvidia

The problem

So neural networks take images, break them into bits, and create a formula that can reconstruct a new image. With GANs, the generator network tries to fool the discriminator network, i.e. a machine trained specifically to spot bullshit.

This technology can be used for good, like helping generate images of what clothes would look like on a person shopping online (the critic makes sure a shirt pattern isn’t accidentally generated onto the model’s arms, for example). But it can also be used for misinformation.

For all their ability, GANs tend to be worse than humans at calling BS when something looks fake, since they don’t have any context or real experience with the physical world that’s being portrayed. But when a platform like Facebook, Google, or Tumblr needs to police fake content, their sheer scale requires automation (there aren’t enough humans in the world to call BS that often). The pitfall is clear: A neural network built to fool a neural network will likely fool whatever algorithm a platform can throw at it.

“It was already able to fool the discriminator… so you need to bring some new technology,” says Or Levy, CEO of Adverif.ai, which uses machine learning to identify fake content. “This is part of what makes GANs so powerful.” Adverif.ai has had success analyzing fake-news text and Photoshopped images, but Levy says that GANs pose a unique threat.

Past examples of faked imagery could often be detected, because they altered images already on the internet. Deepfakes, an automated tool that can animate a face from a still image onto a body in a video, has been used to create fake celebrity pornography. But by looking at the environment surrounding the faces, a company named Gfycat could detect that these were simply altered videos.

With a GAN, there’s nothing to which a company like Gfycat could compare the “altered image,” because that image was actually generated. There is no source image.

So GANs are a technology that makes faking images and video more sophisticated, but they aren’t a culprit in this story. There hasn’t been an actual example of fake news or media from GANs yet, and while the deepfakes algorithm was convincing, it wasn’t a GAN.

Hany Farid, a professor at Dartmouth College and forensics expert, told Nature that there are also new techniques being developed for detecting fake media, like catching the minuscule movements that indicate breathing or a pulse. These measures would make creating a believable fake that much more difficult.

“My adversary will have to implement all the forensic techniques that I use, so that the neural network can learn to circumvent these analyses: for example, by adding a pulse in,” Farid said. “In that way, I’ve made their job a little harder.”