THE FUTURE OF FACT

The dystopian digital future of fake media

A woman wearing a motion capture suit.
Reuters/Toru Hanai
Dismantling our concept of truth.
By Hany Farid

Professor, Electrical Engineering & Computer Science and the School of Information, University of California, Berkeley


This story is part of What Happens Next, our complete guide to understanding the future. Read more predictions about the Future of Fact.

Imagine a world in which we can no longer trust or believe news reports of global conflict, social uprisings, police misconduct, or natural disasters. Imagine a world in which we can no longer believe what our world leaders say in public—or private. Imagine a world in which we simply cannot separate fact from fiction. In this world, how will we function as a democracy, economy, or society?

Advances in digital imaging are allowing digital photographs, videos, and audio recordings to be altered in ways that would have been unimaginable 10 years ago. The democratization of access to sophisticated digital-imaging technologies, powerful machine-learning algorithms, and unprecedented computing power has made it easier to create sophisticated and compelling fakes.

This future is already here. With only a few hours of audio of a person talking, you can synthesize a recording of that person saying just about anything you desire. With only a few hundred images of a person, you can synthesize a video of that person’s face sewn onto another person’s body, facial expressions, head movements, and all.

Combine these two technologies, and it is possible to generate a convincing fake video of a world leader saying whatever you want: anything from an official announcement of a nuclear strike to a private recording in which they admit to colluding with a foreign government to win a national election. The public won't know the difference. Finally, plug the fruits of this powerful technology into the speed and reach of social media, and before any professional source can debunk the broadcast, it has spread too far for viewers to be convinced of its inauthenticity.

We are rapidly heading toward a dystopian digital future in which self-organizing online communities will be able to create and disseminate their own world of pseudo-facts and pseudo-reality. We are already seeing glimpses of this future. For example, a fake image of Parkland shooting survivor Emma Gonzalez, purportedly tearing up the US Constitution, quickly went viral, even after being debunked. (The original photo shows her tearing up a shooting target.) This image continues to be shared online, adding fuel to the conspiracy theories claiming that Gonzalez and her fellow students are paid actors.

If doctored images and videos, fake news, and conspiracies are the virus, social media is the host. Facebook, YouTube, and Twitter do not just allow this content to survive: They actively promote it for financial gain. The core business model of these platforms is engagement—to keep you mindlessly clicking, liking, sharing, and tweeting for as many hours as possible. And sensational, outrageous, and downright fake content often engages us more than the real stuff.

This is partly the fault of our psychology, but it is also the fault of social media. These platforms must take more responsibility for moderating their networks away from clickbait and toward more meaningful and trustworthy content, as Facebook founder and CEO Mark Zuckerberg repeatedly promised during his recent US congressional hearings. This will require a combination of new policies and business practices, as well as the development and deployment of new technologies.


If fake news is the virus and social media is the host, then advertisers are the vaccine. Social media platforms survive because of advertising dollars. The corporate titans of the world have tremendous power to effect change by withholding advertising dollars until these platforms operate in a more socially responsible way. If they want to be part of the solution, then they should wield this power.

But it’s not all on the companies: We as consumers have to get smarter and more critical of what we read and see. We need to get out of our echo chambers and engage with facts and reality in a less partisan and myopic way. We have to demand that social media platforms and advertisers act more responsibly.

Academics like me also have a role to play in continuing to develop and deploy technology that will help the public discriminate between the real and the fake. Because the technology for manipulating digital media is advancing so quickly, this is an enormous challenge. Scientific-funding agencies around the world should create programs, like the DARPA MediFor program, of which I am a part, to support scientists' efforts to develop the next generation of digital-authentication tools.
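One family of authentication tools works by establishing provenance: a trusted source cryptographically tags media at capture or publication, so any later alteration is detectable. The sketch below is a toy illustration of that idea using an HMAC over the raw media bytes; the key, the stand-in media bytes, and the function names are all hypothetical, and real systems (such as those MediFor and industry provenance efforts explore) are far more sophisticated.

```python
import hashlib
import hmac

def sign_media(data: bytes, key: bytes) -> str:
    # Compute an HMAC-SHA256 tag over the raw media bytes.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    return hmac.compare_digest(sign_media(data, key), tag)

key = b"publisher-secret"             # hypothetical signing key held by the source
original = b"\x89PNG...media bytes"   # stand-in for real image/video bytes
tag = sign_media(original, key)

print(verify_media(original, key, tag))         # True: untouched media verifies
print(verify_media(original + b"x", key, tag))  # False: any alteration breaks the tag
```

The point of the sketch is the asymmetry: forging a valid tag without the key is computationally infeasible, so even a single changed byte in a doctored file fails verification.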

If social media platforms refuse to respond to this growing threat, then legislators will have to consider imposing regulations. This has already begun in the European Union, and after numerous congressional hearings this past year on top of the Facebook/Cambridge Analytica scandal, US legislators have, with good cause, grown weary of waiting for Silicon Valley to self-regulate.

The future is not lost, but it is fragile. If we continue along the trajectory of the past decade, then we will be plunged into a digital future in which fact and reality are long-lost concepts. On the other hand, if we begin the needed reforms outlined above, then we can return the internet to its original promise and harness its awesome power for good.

