This article contains some spoilers for season four of Netflix’s Black Mirror.
The new season of Black Mirror is finally here, and with it lots more frightening futuristic gadgets to dissect.
Netflix’s dystopian anthology series paints a dismal portrait of our future, leading many to believe the show is anti-technology. Executive producers Charlie Brooker and Annabel Jones have said that’s not the case. Rather, Black Mirror is a satire, ribbing our tech addiction by showing deeply exaggerated stories of a future that may come to pass if we don’t become critical of our relationship with the devices that run our lives.
The fourth season of the show presents another raft of believably near-future technology: parenting surveillance systems, memory recall and recording machines, highly advanced dating apps, inescapable virtual realities, cloud consciousness, and, of course, killer robot dogs.
Quartz entertainment reporter Adam Epstein consulted Quartz tech reporter and future expert Mike Murphy on some of his burning questions after watching season four of Black Mirror: Can our parents really spy on our every move? When will those robot dogs come to kill us all?
Adam: Hello again, Mike.
Mike: Hi Adam.
Adam: Last time we played this little game, we talked about the technology in Blade Runner 2049, which imagined a dystopian future 30 years from now where we have flying cars and extremely humanlike robots that are capable of reproducing on their own.
Black Mirror feels a little closer to home. Most of the episodes don’t take place too far into the future, so a lot of the tech is just an extrapolation of concepts we’re already quite familiar with: virtual reality, social media, surveillance, that sort of thing.
Mike: True, though somehow most of them end up seeming more frightening than Denis Villeneuve’s movie was.
Adam: Wow, burn. Okay, there isn’t really an official order for these episodes, but we’ll discuss in the order I watched them in, starting with “Arkangel.” This was an engaging tech thriller about parenting and paranoia, but it was probably my least favorite of the new season. Still, the tech at the heart of the episode is terrifying.
After her young daughter briefly goes missing at the playground, a single mother resolves to have her injected with an implant which, when connected to an app on a tablet, allows the mother to literally see through her daughter’s eyes. And not only can she see what her daughter is seeing at all times, but she can check her vitals, location, and even censor things that she deems inappropriate for her daughter to experience. For instance, she turns the filter on for the nasty, aggressive neighborhood dog, so when her daughter walks by it, all she sees is a strange blob making harmless noises. It’s like TV pixelation, except in real life.
Of course, this causes some problems for the daughter as she gets older. The episode is a pretty obvious satire of helicopter parenting, with some quips about modern surveillance techniques and the “Big Brotherization” of mega companies like Google for good measure. How close are we to a future where you can literally keep an eye on someone by…using their own eyes?
Mike: We’re pretty far from being able to inject computers into our bloodstream, though there are a lot of people working on that concept. For the most part, they’re being developed to treat diseases or monitor vitals, which does happen in this episode. Alphabet, Google’s parent company, is also working on computers that can be worn as contact lenses.
We already have wearables that can track vital signs, and apps that parents can install on their kids’ phones to track them and monitor where they are and what they’re seeing on their phones. If they were super weird, they theoretically could strap a livestream camera onto their kid’s head and see everything they were seeing in real time, but that would be a little more conspicuous than computers in your bloodstream.
Adam: Don’t get any ideas, parents. The next episode, “Crocodile,” takes on memory in an interesting way. I have no idea why it’s called “Crocodile,” as there are neither real crocodiles nor metaphorical ones, but that’s not important right now.
The basis of the episode is this: Some time in the very near future, insurance agents can resolve accident claims by hooking witnesses up to a machine which records that person’s subjective memory of an event. Once plugged in, the insurance agent can see a playback of the subject’s memories, as if it were a movie. She can pause, rewind, zoom in, and do pretty much anything you could do if you were editing some footage you shot on your iPhone. Except this isn’t real footage, it’s someone’s internal memory, saved forever on the hard drive that is the brain.
Without giving too much away, the machine doesn’t just record a person’s memory of a specific event that’s being investigated—it sees everything, including things the subject definitely doesn’t want recorded for posterity. (The episode references this technology being used by law enforcement agencies as well, for obvious reasons.)
Oddly enough, there was a very similar invention in Blade Runner 2049, which you said felt ripped from an episode of Black Mirror, so I think that means we have now triggered the singularity. That one, though, was more complex, involving the designing and editing of memories. This one is more just about being able to visualize someone’s existing memory to aid in an investigation, since memories can’t hide the truth as well as people can.
Mike: Yeah, that’s pretty funny, as there were definitely a few episodes in earlier seasons that used technology to treat memories like home movies that could be watched and re-watched. As I said then, the Pentagon is working on technology to restore memory to those with neurodegenerative diseases and brain injuries, but this is still likely decades from being something that we could all have access to, and it would probably just restore our own ability to recall memories, rather than allow someone to hook us up to a TV and watch them like a Netflix TV show.
Adam: Moving on then. The third episode, one of my favorites, is “Hang the DJ,” which explores the question, “What will Match.com look like in a hundred years?”
The episode features a dating system, controlled by an all-seeing, ostensibly all-knowing artificial intelligence, that pairs individuals up based on a highly advanced machine-learning algorithm that takes information gleaned from each relationship and uses it to help people find “the one.”
Each relationship brings the subject one step closer to finding their mathematically perfect soulmate. And weirdest of all, for each relationship, if the blind daters agree, they can see how long the AI has allotted for their relationship before it even begins. Fall in love with your match? Too bad, you only have a few hours to spend together. Can’t stand your match? Sorry, the system expects you to spend two years together.
This is probably bringing up a number of questions about how, exactly, this dating app works, most of which can be answered by the fact that—spoiler!—this all appears to be happening in a closed-off, dystopian community from which there is no escape. Armed guards watch your every move, ensuring you adhere to the rules.
I kind of have to explain the second twist here, so spoilers (again): It’s a simulation. Every relationship in this community, including the episode’s central one that viewers become invested in, is just a metaphorical rendering of all the 1s and 0s in your dating app that try to match you with like-minded souls. So, ultimately, it’s sort of just a goofy personification of Bumble, I think?
But I still want you to assess the deeply disturbing AI dating system featured in the episode, before viewers realize it’s all a simulation. It claims to eventually find you your perfect match, with something like 99.9% accuracy. Can love be found with math?
Mike: Love, as irrational as it may feel at times, is just biology and chemistry, and all of the natural world is really just applied mathematics. So I could definitely see some infinitely powerful computer in the future being able to suss out what makes us us (we’re already trying to answer this question now) and apply that to another person’s characteristics. Throw in some Matrix-like VR simulation into the mix, and it sounds like we’d be able to recreate this episode. Whether we’d want that to replace serendipity, even as small as swiping right on someone on Bumble, is another matter.
Adam: Very deep, Mike. Now it’s time to discuss my favorite episode of the season, “USS Callister,” which I suspect will also be many others’ favorite. It’s just such a blast.
A satire of popular space operas (namely Star Trek), “USS Callister” follows the young, charismatic captain of an Enterprise-like starship and his loyal crew as they take on the villains and monsters of deep space. It’s a bizarre, but still effective spoof of space tropes, until you realize, wait, this is Black Mirror, there has to be something else going on here. This can’t just be Spaceballs.
And you’d be correct. (More major spoilers ahead.) Perhaps not shockingly, this Star Trek-esque story is actually a virtual reality computer game being controlled by an aloof, possibly depressed CTO who has made himself the hero of his own game. As the creator of the game, he knows how to hack it, so he’s made a modified version that allows him to captain a spaceship, a la James T. Kirk. What’s more: Using the DNA of his colleagues, he has uploaded copies of their consciousnesses into the game and trapped them there, forever slaves to his twisted digital vision. Their counterparts in the real world have no idea that versions of themselves have been imprisoned inside a virtual reality.
I know the sheer virtual reality aspect of the episode is not very far off, even if this game is much more detailed and customizable than what games can currently offer. But using DNA to harness one’s consciousness and then uploading that to the virtual reality? That seems like crazy talk to me.
Mike: I feel like if we’re at a point where we can instantaneously analyze someone’s DNA and copy it well enough to mimic their consciousness, we’d probably have achieved the singularity and be able to upload our own minds into the cloud. So we’d probably have transcended making video games.
But there will probably be other ways to copy our likenesses in the near future that won’t require our DNA. Apps like Replika are trying to mimic our idiosyncrasies to the point where bot versions of us could replace us on customer service calls or social media. Throw 3D-scanning technology into the mix and I could easily see something like this existing in the future, without the need for literally replicating us through our DNA.
Adam: The fifth episode is a black-and-white post-apocalyptic horror short film called “Metalhead.” There isn’t a whole lot to explain: At some point in the near future, in a place that’s never explicitly stated (but maybe the English countryside?), humans are few and far between, perpetually on the run from dog-sized autonomous robots that viciously murder any humans on sight.
It’s unclear what’s motivating these robo-dogs. Perhaps it’s their programming, the remainders of a military experiment gone wrong. Perhaps, like Skynet in the Terminator movies, they’ve gone completely rogue and want to take over the planet. Either way, they don’t much care for humans.
The most interesting thing about these killer doggies is that it appears as though they can hack and control other pieces of technology. In pursuit of its human prey, one of these dog things gets into a car and figures out how to make it run. Another one outsmarts an expensive home security system. In a post-apocalyptic world filled with so much discarded tech, the dogs don’t have much trouble tracking and killing the humans still left.
The dogs can do some other cool things, like identify sounds from very far away and pinpoint the location of the source. But the scariest proposition imagined by “Metalhead” is one that’s been posed by many other sci-fi films and shows: What happens when the robots try to kill their creators?
Mike, when is that going to happen? Give it to me straight.
Mike: Well, if we’re lucky, quite soon! Boston Dynamics, a robotics company that used to be owned by Alphabet and is now owned by SoftBank, has been working on robot dogs for years—and they keep getting better and better. Boston Dynamics has a history of bullying its creations, so if they do rise up soon, it wouldn’t be too surprising if they went after their makers first!
In all seriousness, Boston Dynamics builds some of the most advanced robots in the world, and they’re still relatively rudimentary compared to us. For the foreseeable future, I’m less concerned about the robots rising up and turning against us than about these tools being used by bad human actors for nefarious purposes. We’re already seeing small robots, like drones, being used as bomb couriers in the Middle East.
Adam: That’s a relief. Let’s tackle the three-pronged final episode of the new season, “Black Museum,” an anthology of sorts, telling the stories of three pieces of technology that are now stored at a tacky roadside museum somewhere out in the American desert.
The first is an apparatus that, when placed over someone’s head, allows the wearer to feel the pain and other sensations of another person. It’s given to an emergency room doctor as a way for him to help diagnose patients in crisis. Once he knows exactly what an aortic dissection feels like, for instance, he can diagnose one immediately, possibly saving a patient’s life. Cool, if a little morbid, right?
Wrong! The doctor becomes addicted to the pain, soon seeking it out in his personal life. Eventually, no amount of pain is enough for him, and, well, things end poorly.
The next piece of tech on display at this museum is a teddy bear. Not just any teddy bear, but a teddy bear that has the consciousness of a comatose woman uploaded into it. How’d it get there, you ask? Well, when the woman went into a coma, her husband decided to go ahead with a controversial experimental procedure that implanted her consciousness into his, creating a dual persona within his body. She no longer existed in physical form, but she could see what he saw, feel what he felt, and converse with him. (To an outsider, it looked like he was having a two-sided conversation with a ghost.) Naturally, this confused their young son quite a bit. Mommy was literally living inside of daddy.
After a few years of this, daddy couldn’t take it anymore, and decided to have his wife’s consciousness removed from his brain and placed into an inanimate object: a teddy bear. She could “communicate” through a limited series of voice commands, but now had even less agency than she did before. It’s utterly horrifying.
The last piece of tech is similar: Before he dies, a death row inmate’s consciousness is saved and uploaded to the cloud, effectively giving him immortality. Except, his consciousness is used to entertain tourists who want to virtually execute him over, and over, and over again. So, yes, he’s “alive,” but he must endure an infinite, unfathomable torture by being electrocuted for the rest of time. (It’s deeply morbid but, fortunately, has a slightly happier ending than the teddy bear.)
So that about covers “Black Museum.” Mike, your thoughts on the pain apparatus, teddy bear consciousness, and sadistic electrocution exhibit for tourists?
Mike: The pain apparatus feels like an extreme version of people who get addicted to the pain of tattoos (or…other things…) and can’t stop. I can see some sort of empathetic VR setup becoming a reality. We already have bodysuits that can mimic the feeling of a gunshot for VR games, so perhaps this isn’t that far-fetched, but I’d hope we wouldn’t want to use it so masochistically.
I do think we’ll get to a point in the very distant future where we’ll be able to upload our consciousness to the cloud, but I’d hope that whatever agency we have would be greater than that of one of those stuffed-toy nanny cams we have today.
I don’t know whether we’d be able to upload our consciousness into another brain, or whether there’d really ever be space for two people’s thoughts and memories in one person’s grey matter. That’s more of a question for a cognitive scientist.
There have been a few episodes of Black Mirror dealing with Sisyphean tortures of uploading a consciousness into some AI construct and having it live out an eternity in some bleak fashion, and this prisoner seems to be another one. Right now, we throw people in jail for life because it’s more humane than killing them (in most instances), so I can’t really imagine us artificially extending someone’s life indefinitely to torture them like this. But then again, we do give our worst offenders multiple life sentences in cases where it’s deserved, so perhaps this could be a way for them to actually fulfill their sentences?