Blade Runner is the perfect film for experimentation with artificial intelligence. In Ridley Scott’s sci-fi classic, Rick Deckard (played by Harrison Ford) hunts down and “retires” humanoid robots called replicants. A new class of replicants called “Nexus-6” is capable of exhibiting human-like emotional responses, further blurring the line between who is human and who is machine. This journey eventually leads Deckard to question his own humanity.
So it’s all too fitting that a reconstruction of the film created by an AI could be mistaken for the real thing.
Vox reported yesterday that Warner Bros. issued a DMCA takedown notice to Vimeo for hosting videos that appeared to be of the 1982 film. Except they were not direct copies, but rather trippy, hallucinatory reconstructions made by an artificial neural network as part of a researcher’s project on autoencoding.
Terence Broad, a research student and artist at the University of London, wanted to teach a neural network to reassemble the jumbled pieces of video data within Blade Runner back into a coherent film—essentially replacing the hand-engineered processes of video encoding and decoding with one the network learns on its own.
Broad first taught the network to recognize frames from Blade Runner by feeding it frames from the film and contrasting them with footage from elsewhere. Then he got the network to, in a manner of speaking, “watch” the movie. It compressed every frame into a short string of numbers and then tried to reconstruct the original frame from that compressed sample as closely as it could.
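That encode-compress-decode loop is the core of an autoencoder. Broad’s actual model was a learned (variational) autoencoder trained on the film itself; as a loose, simplified sketch of the same pipeline, the example below uses a linear autoencoder (equivalent to PCA via SVD) on random stand-in “frames”—every name and number here is illustrative, not Broad’s code:

```python
import numpy as np

# Sketch of the autoencoder idea: flatten each frame, squeeze it through a
# small bottleneck (the "string of numbers"), then reconstruct an
# approximation of the frame from that compressed code.

rng = np.random.default_rng(0)

# Stand-in "film": 200 tiny 8x8 grayscale frames (no real footage needed).
frames = rng.random((200, 8 * 8))
mean = frames.mean(axis=0)
centered = frames - mean

# Fit the codebook: the top-k principal directions of the frame data.
k = 16  # bottleneck size -- far fewer numbers than the 64 pixels per frame
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:k]  # shape (k, 64)

def encode(frame):
    """Compress a frame (or batch of frames) to k numbers each."""
    return (frame - mean) @ basis.T

def decode(code):
    """Reconstruct an approximate frame from its compressed code."""
    return code @ basis + mean

codes = encode(frames)           # shape (200, 16): the compressed film
reconstructed = decode(codes)    # shape (200, 64): the lossy "playback"
error = np.mean((frames - reconstructed) ** 2)
print(f"bottleneck: {k} numbers per frame, mean squared error: {error:.4f}")
```

The reconstruction is deliberately lossy—the bottleneck forces the model to keep only the broad strokes of each frame, which is roughly why Broad’s output looks like a smeared, dreamlike version of the original.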
This is what it spat back out:
While the neural network’s reconstruction of Blade Runner clearly looks a bit different from the original, it was close enough to trigger Warner Bros.’ automated takedown bot, which crawls the web for instances of copyright infringement and got Vimeo to take the videos down. Broad wrote on Twitter that only the reconstructed videos of Blade Runner were taken down—not those with actual footage of the film shown side-by-side, like the one above. Broad also said the reconstructed videos were later restored.
While this reconstruction isn’t so much a machine interpretation as it is a compression, teaching AI to “interpret” visual works is all the rage in machine learning. Last year Google taught its image recognition software to “dream,” and the results were similarly fascinating, yet somewhat disturbing. Shortly after Google published its research, someone fed the film Fear and Loathing in Las Vegas through the neural network.
Broad used the same autoencoding method on Richard Linklater’s animated sci-fi thriller A Scanner Darkly (which, like Blade Runner, was based on a Philip K. Dick novel). You can read Broad’s blog post about the project on Medium, or, if you’re feeling really adventurous, you can delve into Broad’s actual dissertation here. You can also watch an autoencoded version of the film in its entirety here, if you’re curious what it might be like to watch it on psychedelic mushrooms.
Here’s Blade Runner’s legendary “Tears in Rain” scene, as “seen” through the eyes of an artificial intelligence:
With such a wild imagination, it’s too bad this neural network can’t actually live. But then again, who does?