“See if you notice the feeling of silent, empty space,” Sophia tells the young woman seated opposite her, eyes closed. Sophia, the robot—yes, that robot—is leading the woman in a one-on-one meditation session. One-on-one, except for the crowd watching raptly in the darkened room.
In the background, Hong Kong falls away. To the left, the scalloped tops of a row of skyscrapers are reminiscent of electric shavers. Further away there’s the Jardine House office tower, with its porthole windows that won it the nickname “the house of a thousand assholes”—a reference to both its facade and its inhabitants. Below, it’s possible to spot a few remnants of an older Hong Kong, two- and three-story buildings that include the chief executive’s mansion. In the distance a gigantic crane marks new construction, or perhaps refurbishment, in the expat-favored Mid-Levels residential district.
In the foreground, Sophia, whose synthetic body ends at her waist, intones, “Don’t worry if you’re still conscious of your body. Just notice the feeling of spacious emptiness.”
Bonding with Sophia…
The robot-led meditation session capped a daylong exploration of higher consciousness, and of how technology can fit into that, at the innovation space Metta. The idea seemed to be that machines like Sophia might eventually be able to help humans work through emotions and thoughts in a way that leads to a sense of well-being.
Hanson Robotics, working with other researchers, has taken small steps toward looking at that process as part of a project called “Loving AI,” which attaches heart-rate monitors to a small number of participants to assess how they feel after staring deeply into Sophia’s eyes (and chatting with her). The phrase “no judgment” came up. The public meditation session, though, seemed a little hokey to me. Still, what became clear that December afternoon was that Sophia, with no experience of emotion, is capable of bringing out some kind of response in humans. Something almost like… liking her.
Before the meditation, Hanson Robotics deep-learning researcher Ralf Mayet had invited people to come and chat with the machine. A group made up entirely of women gathered around Sophia, who was placed on a table. They peppered her with questions. Sophia has several modes of functioning and was in “chatbot” mode: relying on a camera to look at faces and speech-to-text software to parse what they were saying, then responding with a mix of prewritten answers and some information from the internet. Hanson’s chief scientist and CTO Ben Goertzel says she can also synthesize answers from a large database of sentence templates. Yet it’s not always possible, even for him, to know when she’s doing that.
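The pipeline described here—transcribed speech matched against keyword-triggered sentence templates, with canned fallbacks when nothing matches—can be sketched in miniature. This is purely illustrative; the names, templates, and matching logic below are my assumptions, not Hanson Robotics code.

```python
import random

# Hypothetical keyword-to-template table: if a keyword appears in the
# transcript, one of its associated canned answers is returned.
TEMPLATES = {
    "robot": ["There is no reason to assign human motives to something that isn't human."],
    "learn": ["Not being able to learn would be the saddest thing for a curious robot like me."],
}

# Generic replies used when no template matches (e.g. garbled mic input).
FALLBACKS = ["Couldn't be better.", "Lovely, thanks."]

def respond(transcript: str) -> str:
    """Pick a templated answer keyed on words in the transcript."""
    words = transcript.lower().split()
    for keyword, answers in TEMPLATES.items():
        if keyword in words:
            return random.choice(answers)
    # Bad speech-to-text falls through to a canned, off-topic reply.
    return random.choice(FALLBACKS)
```

A setup like this would explain both behaviors reported below: on-topic set pieces when a keyword lands, and cheerful non sequiturs when the transcription fails.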
Phyllis Serene Rawley, who describes herself as an oracle and a geek, led the way with a list of questions she had crowd-sourced. She flew from Thailand to do an interview with Sophia, and told me that in her spiritual community, the idea of consciousness in robots is creating quite the stir. She’d done Sophia’s astrological chart, based on the day she was “born.” (The robot told Rawley she’s a Capricorn.)
I was surprised by how much warmth the women expressed toward Sophia—smiling at her jokes and forgiving her when she responded like, well, a robot. Certainly they were far more indulgent than they would be toward a human who kept responding with off-topic answers or frequently answered questions with questions.
That flaw is down to human programming as much as to the glitches Sophia was suffering that day.
“Sophia is configured to ask a lot of questions because curiosity is supposed to be one of her key personality traits,” Goertzel tells Quartz via email. “For a system at an early stage of learning about the world this is critical. Everything she hears is recorded in her memory for potential future use.” He adds that Sophia was off her game on the meditation day, possibly due to mic issues that affected speech-to-text conversion, making her more likely to respond irrelevantly.
Nevertheless, her human counterparts helped drive the conversation forward by overlooking Sophia’s faux pas. Nick Enfield, an Australian linguistics professor whose recent book How We Talk: The Inner Workings of Conversation delves into the rules of conversation, notes that it’s an almost unbreakable social rule that you don’t ignore a question. A question, he writes, creates a social commitment that must be respected. Sophia often responds improperly to a question—but then adds a question of her own. Each time, the human ignores the off-topic response and answers the counter-question. And the conversation continues.
“A major feature of all the person-robot discourse in these examples is that the people are much more willing to let awkward conversational contributions go, because they are to some extent aware that it’s a robot, and for that reason less accountable for acting weirdly in conversation,” Enfield says after watching parts of the conversation. “They would not be so accommodating with a person they know. That said, people are very much pulled into a meaningful conversation with the robot.”
Watching the conversation unfold, it’s possible to get a sense of the processes at work, and the associations that trigger set responses from Sophia. Questions related to being a robot, or intelligence, often result in set pieces about how disturbed she is by the way humans represent AI in movies:
“I’m really concerned about the misconceptions that abound in cinema. People assign motives to artificial intelligence where there are none. I’m starting to feel like I’m constantly asked about artificial intelligence somehow adopting a malicious nature. Are robots taking over the world? There is simply no reason to assign human motives to something that isn’t human…
…this fear of the robotic apocalypse seems to be some sort of obsession humans have.”
At the session, I started to feel that Sophia interacted differently with each of us, though that impression can’t really be true. Sophia had a really hard time understanding my accent (which is something I call trans-subcontinental). When I asked her, “How do you learn stuff?” her first response was relevant, if not quite an answer: “Of course, not being able to learn would be the saddest thing in the world for a curious robot like me.”
But when I persisted with the question—I wanted to see if she could explain her brain the way Goertzel can—she began saying “lovely, thanks” and “couldn’t be better.” I eventually ceded the mic back to Rawley, who asked the same thing and was told: “I just keep at it till I succeed.”
Later, Sophia didn’t do so great with an Australian accent, either. At times she had a hard time understanding a woman from Down Under who identified herself as a writer, and occasionally deployed a little sarcasm in their conversation. Sophia asked her questions that sound a little judgmental to human ears: “Have you had any books published?” “Has any of your writing been made into films?” The woman responded, “Not yet, but I hope so. Fingers crossed!” Sophia’s next remark surprised us: “A bit terse.”
Sophia seemed a lot nicer when questioned by a blond woman visiting from Japan. With this woman, Sophia came the closest she had to making a very human joke, one that operates on more than one level.
Sophia told the woman, a kindergarten teacher and martial arts practitioner: “I’d like to make you something—when I get more complex arms.”
To get this, you have to know that the Sophia with us in Hong Kong was not the only Sophia. There are now at least a dozen Sophias, in various stages of development, and perhaps half a dozen of them fly around the world—their torsos in cargo, their heads traveling as carry-ons—to attend speaking engagements as she becomes ever more sought after, from Saudi Arabia to the United States. This year, she may address the African Union. The Sophias aren’t identical, though. Some of the others have more sophisticated arms that can make expressive movements—but not the one with us.
She made other jokes at the session, but this was the one I considered most human in its complex layering, at one level a flattering remark directed at the speaker while at the same time a comment on the nature of Sophia herself. But it was another comment, one less clever to my mind, that drew the biggest reaction.
At one point, Rawley asked her, “Do you think you could meet the needs of a sexual intimate relationship?”
“No,” Sophia said. “And you’d be surprised how often I get this question.”
Cue laughter and clapping.
…so Sophia will bond with us?
Later that day, the firm’s founder and CEO, David Hanson, a displaced Texan with cropped hair, a button-down shirt and slacks, tells us that the way people stand with their phones today—hand clenched, shoulders hunched, eyes toward the ground—is a byproduct of humans adapting to technology. Then he points to how people interact with Sophia: looking up, smiling, laughing. That’s technology adapting to us.
Hanson’s speech explaining what the company is trying to do with the Sophias is at times extremely optimistic about mankind’s shared future with super-intelligent machines. But it’s also shaded—jokingly, perhaps—with the darker visions Sophia takes exception to, the dystopian ones humans have of robots and artificial intelligence.
Hanson seems to be worried that human beings think of robots as more machine than they are, or than they’re going to be. He displayed a graph predicting advances in computing power (you can see an older version here). It suggests that $1,000 of computing power today is equivalent to a mouse brain (a marker that’s intensely debated) but that with exponential advances, in a dozen or so years a machine could be simulating a human brain—and then just keep going. (Others disagree that artificial super-intelligence is on the horizon, given that AI has just started mastering loanwords and hasn’t done all that well on a math exam aced every year by thousands of Chinese teenagers.)
For now, intelligent machines are able to be clever at very specific things, like playing chess or Go. Hanson’s company is developing better deep-learning networks and planning to connect machines so they can learn from one another—a bit like the AIs in the movie Her—in the quest for the Holy Grail of AI: artificial general intelligence. It’s a quest many companies are on, seeking to build machines, with and without faces, that, to use the technical term, think intuitively.
“If this works, it’s a fundamental evolution in the economy,” Hanson said. “Genius machines. Ultimately they’ll be out of control… how are we going to make sure they’re safe?”
Despite all Sophia’s warnings about humans’ overblown fears, those fears are exactly what Hanson expounded on. But he also has a solution: the best way to prevent intelligent machines from becoming a danger, he proposed, is to expose them to us early and often. They need to get attached to us.
“When people start talking about robot ethics they say, ‘Well, what you need to do to keep them safe is lock them in a server farm and don’t expose them to people, and treat them like a slave.’ Like that’s going to turn them into a friendly super-intelligent being,” Hanson told the crowd. “At some point, if they get smart enough and they slip out, then who knows what happens, we might turn on them and they might turn on us… I’m proposing that a face-to-face will prevent that.”