A seminal moment in the history of artificial intelligence (AI) occurred on Oct. 11, when a robot addressed the UK’s House of Lords to discuss the topic of AI-created art.
The focus of the meeting was Ai-Da, the AI-controlled, humanoid robot conceived by art dealer Aidan Meller and researcher Lucy Seal. The software algorithms controlling Ai-Da were developed by researchers at Oxford University, while the hardware components were produced by Engineered Arts and engineering students Salah Al Abd and Ziad Abass.
The demonstration of the robot, which made its debut in 2019, quickly gave way to serious discussion of how AI-produced art might impact humans in the future. The assembled lawmakers appeared uneasy throughout the presentation, and at times unsure of how to address the robot in the room, as both Meller and Ai-Da responded to their questions.
Aside from the novelty of a robot addressing the UK House of Lords for the first time, the parliamentary proceedings were notable for Meller’s stark conclusions about who, or what, can engage in creative thinking.
“Creativity is not restricted to subjective internal conscious brain processes,” Meller said. “Creativity can very much be done very well by AI in lots of remarkable ways. It can be studied and mimicked, and that is a game changer.”
Robot technologies that can compete with humans in the realm of creativity are already here
AI tools like DALL-E already allow anyone to use text prompts to create art that mimics human brush strokes with surprising accuracy. Some traditional artists have responded by arguing that the human ability to artfully put paintbrush to canvas or pencil to paper is what differentiates offline human art from AI creations in the digital realm.
The existence of Ai-Da pushes back against such a notion. Ai-Da’s art, which predates DALL-E by a couple of years, is generally demonstrated as an offline exercise.
Ai-Da produces its art by capturing images of the real world with cameras embedded in the robot’s eye sockets. Those images are then processed by its algorithms, and the results are rendered by its robotic arm onto a canvas or paper surface.
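To make that workflow concrete, the sketch below shows what a capture-process-draw loop might look like in code. It is a purely hypothetical illustration based only on the description above: the synthetic image, the thresholding step and the pen-movement interface are invented stand-ins, not Ai-Da’s actual software.

```python
# Hypothetical sketch of a "camera -> algorithm -> robotic arm" loop, loosely
# mirroring the article's description of how Ai-Da works. Every function here
# is an invented stand-in, not Ai-Da's actual software.

from typing import List, Tuple

Stroke = Tuple[int, int]  # (x, y) position the arm visits with the pen down


def capture_image(width: int = 8, height: int = 8) -> List[List[int]]:
    """Stand-in for the eye-socket cameras: returns a tiny grayscale image."""
    # A synthetic diagonal line: 0 = dark subject, 255 = bright background.
    return [[0 if x == y else 255 for x in range(width)] for y in range(height)]


def image_to_strokes(image: List[List[int]], threshold: int = 128) -> List[Stroke]:
    """Stand-in for the drawing algorithm: dark pixels become pen positions."""
    return [
        (x, y)
        for y, row in enumerate(image)
        for x, value in enumerate(row)
        if value < threshold
    ]


def draw(strokes: List[Stroke]) -> None:
    """Stand-in for the robotic arm: simply prints the planned pen motions."""
    for x, y in strokes:
        print(f"move pen to ({x}, {y}) and mark")


if __name__ == "__main__":
    draw(image_to_strokes(capture_image()))
```

In a real system, the thresholding step would be replaced by far more sophisticated image-processing models, and the printed commands by motion instructions sent to the physical arm.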
It doesn’t take much imagination to envision Ai-Da’s physical form harnessing the powers of DALL-E 2 or another AI-art generator like Midjourney to produce polished, original works that replicate the style of Rembrandt or Monet.
The creeping sense of potential human obsolescence as AI improves
“I am actually partly terrified by what you’ve been saying, because from someone who knows very little about this field, this feeds into all the films about AI taking over the world,” House of Lords member Lynne Featherstone said during the Communications and Digital Committee meeting with Ai-Da and Meller.
“[Ai-Da is] so sophisticated compared to what I thought we were going to see today… You’re saying that because she is learning all the time, this is just going to increase exponentially…[W]e can’t tell what effect it’s going to have on the creative world really.”
That constant improvement of AI systems underlies the anxiety gripping the industries it could disrupt. What serves as an artist’s assistive tool today could, as its machine learning models keep improving, eventually surpass even the most brilliant human artists.
“AI is something that can be an enormous generator of help for an artist, or it could replace the artist,” Meller said when asked about the technology’s role alongside human creators. “I think both will actually happen.”