In 2004, while conducting training exercises over the Pacific Ocean, two US Navy pilots reported something extraordinary: a mysterious, blindingly fast flying object that caused the sea to boil, rotated mid-air, and could fly more than 60 miles in under a minute, despite having no visible means of propulsion.
The episode was just one of many investigated by a highly secretive Department of Defense program that ran from 2007 to 2012 and was championed by a handful of US Senators. A New York Times report on the program mentioned a clandestine government warehouse in Nevada that may or may not be storing “alien alloys” recovered from similar flying objects.
Once upon a time, rumors and revelations like these would have sent the public into a frenzy. They would have joined Roswell and Area 51 in the pantheon of UFO-conspiracy chatter. In 2018, however, the possible existence of alien life struggles to cut through a news cycle dominated by earthly worries such as sexual-assault scandals, government shutdowns, and corporate tax-cut bills. Instead, we’re more worried about a different threat to life as we know it: artificial intelligence.
An alien love affair
Not that long ago, society was still totally enthralled by the prospect of little green men and unidentified flying objects. For decades we were fascinated by the unknown universe and what it might hold—an obsession that was played back to us on film and TV.
Steven Spielberg’s Close Encounters of the Third Kind imagined harmonious first contact with interstellar travelers. E.T. the Extra-Terrestrial raised kids’ hopes that they might stumble across a new alien friend in the toolshed. George Lucas’s original Star Wars trilogy conjured whole galaxies teeming with alien life. And Star Trek presented a hopeful vision of humanity stepping out into a wondrous cosmos filled with discovery, exploration, and adventure.
The flip side, of course, was the worry that whatever we found out there would either try to kill us or enslave us. Ridley Scott's Alien and its James Cameron-directed sequel, Aliens, exploited the fear that we would lose our position atop the galactic food chain. Other-worldly creatures infiltrated our bodies and minds in The X-Files. In the first episode of South Park, Eric Cartman was beamed aboard a flying saucer and anally probed. And on The Simpsons, Kang and Kodos rigged the 1996 US presidential election with an audacity that would make Vladimir Putin blush.
Twenty years later, sci-fi is still a crowd-pleaser. As The Last Jedi’s accountants will tell you, we still love a good space opera. Arrival and Life have recently told alien-encounter stories in interesting and successful ways, and we even still have time for a big, bad alien villain, which we’ll get in 2018 courtesy of Avengers: Infinity War.
But the prospect of encountering creatures from beyond the stars no longer fills us with quite the wonder and fear it once did. We don’t need to go to outer space to find the future anymore: It’s right here in our pockets (and it can also order us the best Chinese food in the area).
The robot revolution
As we’ve turned our gaze away from the stars and toward our screens, our anxiety about humanity’s ultimate fate has shifted along with it. No longer are we afraid of aliens taking our freedom: It’s the technology we’re building on our own turf we should be worried about.
The advent of artificial intelligence is increasingly bringing about the kinds of disturbing scenarios the old alien blockbusters warned us about. In 2016, Microsoft’s AI chatbot Tay became a Hitler-loving mess within hours of its launch. Tesla CEO Elon Musk urged the United Nations to ban the use of AI in weapons before it becomes “the third revolution in warfare.” And in China, the government is rolling out AI surveillance cameras to track 1.3 billion people at a level Big Brother could only dream of.
As AI’s presence in film and TV has evolved, space creatures blowing us up now seems almost quaint compared to the frightening uncertainties of a computer-centric world. Will Smith went from saving Earth from alien destruction to saving it from robot servants run amok. More recently, Ex Machina, Chappie, and Transcendence have all explored the complexities that arise when the lines between human and robot blur.
However, sentient machines aren’t a new anxiety. It arguably all started with Ridley Scott’s 1982 cult classic, Blade Runner. It’s a stunning depiction of a sprawling, smog-choked future, filled with bounty hunters muttering “enhance” at grainy pictures on computer screens. (“Alexa, enlarge image.”) The neo-noir epic popularized the concept of intelligent machines being virtually indistinguishable from humans and asked the audience where our humanity ends and theirs begins.
Two years later came The Terminator, a franchise that replaced Scott’s existential, mood-lit pondering with a leather-clad robotic Arnold Schwarzenegger packing a big-ass gun. “Inhuman, relentless, unstoppable,” The Terminator’s trailer promised, cementing the image of a blank-faced, implacable avatar of destruction in the popular consciousness, and warning of a world in which we engineer our own demise.
The Wachowski sisters went even further in 1999’s The Matrix, appealing to scrutinizing minds who couldn’t quite shake the nagging sense that something was fundamentally off with the world. “What is ‘real’? How do you define ‘real’?” asked an impeccably dressed Morpheus, reducing humanity to a glorified power pack designed to fuel robotic domination. Scientists and cranks have been debating whether we exist only in a giant simulation ever since.
More recently, Spike Jonze’s Her in 2013 depicted a near-future in which we do more than rely on AI personal assistants: We bond with them, fall in love with them, and, in some cases, try to have sex with them. Blade Runner 2049 also ran with the idea that artificially created replicants might form bonds with humans outside the roles imagined by their creators. People already treat digital assistants such as Amazon’s Alexa and Apple’s Siri like real people, but Jonze and Scott went further, imagining an AI that will outgrow us emotionally and spiritually as well as intellectually, leaving us even more alone than before.
In the decades since Blade Runner first made us question our impending intimate relationship with machines, the need to set AI fables in the future has fallen away. While heavier sci-fi fare like Electric Dreams still leaps dozens of years ahead, the final season of Parks and Recreation depicted a dystopian present in which a cheery-faced global tech company exploits people’s private data to send them personalized gift baskets via drone.
And then there’s Black Mirror. Besides the eerie accuracy with which it predicted a scandal involving a British prime minister getting frisky with a pig, it regularly throws up nightmarish scenarios close enough to real life to make the show seem almost prophetic: a digital “grain” installed behind the ear that records everything its wearer sees and hears; an app that rewards and punishes people based on a crowdsourced social ranking; and hackers blackmailing people by secretly recording them masturbating through their laptop cameras.
Even alien sci-fi now acknowledges that we’ve got worse things to worry about than extra-terrestrials: ourselves.
In 2009’s Avatar, Cameron flipped the aliens-vs-humans script of his own Aliens, released more than two decades earlier. In this narrative, the alien Na’vi are the underdogs, fighting a greedy, militaristic enemy—us. The battleground isn’t Earth: It’s Pandora, a resource-rich moon that a private security firm fights to secure for a mining corporation. The rest of the universe isn’t trying to doom us; humans are doing a pretty good job of that themselves.
If that message cuts a little too close to home, so too does the warning inherent in Stanley Kubrick’s 2001: A Space Odyssey. While Kubrick’s depiction of a malignant, inhuman intelligence that rebels against its creators was groundbreaking, HAL 9000’s capacity for emotion was just as memorable. “I’m afraid. I’m afraid, Dave. My mind is going. I can feel it,” HAL says tonelessly as it is forcibly deactivated, pleading to hold onto the sentience its human creators briefly gifted it and then ripped away.
As our culture has shown, our fears of both outer space and superhuman computers are nothing compared to the human capacity for cruelty. Even in our most high-flown sci-fi blockbusters, the message has been there all along:
It’s not aliens or artificial intelligence we should fear, for we are our own worst enemies.