In my brief period handling PARO, I can’t say I felt anything more than mild amusement–and certainly not companionship. Dogs and cats can do their own thing; they can ignore you, bite you or leave the room. Simply by staying with you they’re saying something. PARO’s continuing presence says nothing.

But then, I’m not frail, isolated, lonely or living in a care home. If I were, my response might be different, especially if I had dementia, one of the conditions for which PARO therapy has generated particular interest. Shibata reports that his robots can reduce anxiety and aggression in people with dementia, improve their sleep and limit their need for medication. They also lessen their hazardous tendency to go wandering and boost their capacity to communicate.

This value as a social mediator interests Amanda Sharkey and colleagues at the University of Sheffield. “With dementia in particular it can become difficult to have a conversation, and PARO can be useful for that,” she says. “There is some experimental evidence, but it’s not as strong as it might be.” She and her colleagues are setting up more rigorous experiments. But she finds the calculated use of a PARO worrying.

“You might begin to imagine that your old person is taken care of because they’ve got a robot companion,” she says. “It could be misused in a care home by thinking, ‘Oh well, don’t bother to talk to her, she’s got the PARO, that’ll keep her occupied.’” I raise this issue with Shibata. He insists it isn’t a risk but, despite my pressing the point, is unable to say why it couldn’t happen.

Reid Simmons of the Robotics Institute at Carnegie Mellon University tells me that it doesn’t make sense to pretend you can create a robot that serves our physical needs without evoking some sense of companionship. “They’re inextricably linked. Any robot that is going to be able to provide physical help for people is going to have to interact with them on a social level.”

Belpaeme agrees. “Our brains are hard-wired to be social. We’re aware of anything that is animate, that moves, that has agency or that looks lifelike. We can’t stop doing it, even if it’s clearly a piece of technology.”

Help from the Care-O-bot

Hatfield, Hertfordshire. An apparently normal house in a residential part of town. Once through the front door I’m confronted by a chunky greeter, just below my shoulder height. Its black-and-white colour scheme is faintly penguin-like, but overall it reminds me of an eccentrically designed petrol pump. It’s called a Care-O-bot. It doesn’t speak, but welcomes me with a message displayed on a touch screen projecting forward of its belly region.

Care-O-bot asks me to accompany it to the kitchen to choose a drink, then invites me to take a seat in the living room, following along with a bottle of water carried on its touch screen, now flipped over to serve as a tray. My mechanical servant glides silently forwards on invisible wheels, pausing to perform a slow and oddly graceful pirouette as it confirms the location of other people or moveable objects within its domain.

Parking itself beside my table, Care-O-bot unfurls its single arm to grasp the water bottle and place it in front of me. Well, almost–it actually puts it down at the far end of the table, beyond my reach. Five minutes in Care-O-bot’s company and already I’m thinking of complaining about the service.

The building I’m in, known as the robot house, is owned by the University of Hertfordshire. It was bought a few years ago because a university campus laboratory is not an ideal setting in which to assess how experimental subjects might find life with a robot in an everyday domestic environment. A three-bedroom house set among others in ordinary use provides a more realistic context.

The ordinariness of the house is, of course, an illusion. Sensors and cameras throughout it track people’s positions and movements and relay them to the robots. Also monitored are the activity of the kitchen and all other domestic appliances, whether doors and cupboards are open or closed, whether taps are running–everything, in short, that features in our activities of daily living.

Image: Thomas Farnetti/Wellcome Images for Mosaic (CC-BY)

Joe Saunders, a research fellow in the university’s Adaptive Systems Research Group, likens Care-O-bot to a butler. Decidedly unbutlerish is the powerful articulated arm that it kept tucked discreetly behind its back until it needed to serve my water. The arm is “powerful enough to rip plaster off the walls,” says Saunders cheerfully. “This robot’s a research version,” he adds. “We’d expect the real versions to be much smaller.” But even this brute, carefully tamed, has proved acceptable to some 200 elderly people who’ve interacted with it during trials in France and Germany as well as at Hatfield.

As Tony Belpaeme pointed out to me, the robots we have right now don’t have the skills that are most needed: the ability to tidy houses, help people get dressed and the like. These things, simple for us, are tough for machines. Newer Care-O-bot models can at least respond to spoken commands and speak themselves. That’s a relief because, to be honest, it’s Care-O-bot’s silence I find most disconcerting. I don’t want idle chatter, but a simple declaration of what it’s doing or about to do would be reassuring.

I soon realize that until the novelty of this experience wears off, it’s hard for me to judge what it might feel like to share my living space with a mobile but inanimate being. Would I find an advanced version of Care-O-bot–one that really could fetch breakfast, do the washing up and make the beds–difficult to live with? I don’t think so. But what of more intimate tasks–if, for example, I became incontinent? Would I cope with Care-O-bot wiping me? If I had confidence in it, yes, I think so. It would be less embarrassing than having the same service performed by another human.

After much reflection, I think adjusting to the physical presence of a robot is the easy bit. It’s the feelings we develop about them that are more problematic.

Kerstin Dautenhahn, of the Hatfield robot house, is Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire. “We are interested in helping people who are still living in their own homes to stay there independently for as long as possible,” she says. Her robots are not built to be companions, but she recognises that they will, to a degree, become companions to the people they serve.

“If a robot has been programmed to recognise human facial expressions and it sees you are sad, it can approach you, and if it has an arm it might try to comfort you and ask why you’re sad.” But, she says, it’s a simulation of compassion, not the real thing.

This worries me. But it also puzzles me. If dogs, cats, robot seals and egg-shaped keyrings can so easily evoke feelings of companionship, why should I be exercised about it?

Charlie, the robot I played the sorting game with, is designed to entertain children while helping them learn about their own illnesses. (Charlie is also used in a therapy for children with autism.) When children are introduced to Charlie, they’re told that it too has to learn about their illness, so they’ll do it together. They’re told the robot knows a bit about diabetes, but makes mistakes.

“This is comforting for children,” says Belpaeme. “If Charlie makes a mistake they can correct it. The glee with which they do this works well.”

Children bond with the robot. “Some bring little presents, like drawings they’ve made for it. Hospital visits that had been daunting or unpleasant can become something to look forward to.” The children begin to enjoy their learning, and take in more than they would from the medical staff. “In our study the robot was not a second-best alternative, but a better one.”

The angst we generate over adults forming relationships with robots seems not to be applied to children. Consider the role of dolls, imaginary friends and such like in normal childhood development. To start worrying about kids enjoying friendships with robots seems, to me, perverse. Why, then, am I so anxious about it in adult life?

“I don’t see why having a relationship with a robot would be impossible,” says Belpaeme. “There’s nothing I can see to preclude that from happening.” The machine would need to be well-informed about the details of your life, interests and activities, and it would have to show an explicit interest in you as against other people. Current robots are nowhere near this, he says, but he can envisage a time when they might be.

Belpaeme’s ethical sticking point would be the stage at which robot contact becomes preferred to human contact. But in truth, that’s not a very high bar. Many children already trade hours of playing with their peers for equivalent hours online with their computers.

The elements of companionship

In the end, of course, the question is not whether I want a robot companion to care for me, but whether I would accept being cared for by a robot. There are cultural considerations here. The Japanese, for example, treat robots matter-of-factly and appear more at ease with them. There are two theories about this, according to Belpaeme. One attributes it to the Shinto religion, and the belief that inanimate objects have a spirit. He himself favours a more mundane explanation: popular culture. There are lots of films and TV series in Japan that feature benevolent robots that come to your rescue. When we in the West see robots on television, they are more likely to be malevolent.

Companionship, to my mind, incorporates three key ingredients: physical presence, intellectual engagement and emotional attachment. The first of these is not an issue. There’s my Care-O-bot, ambling about the house, responsive to my call, ready to do my bidding. A bit of company for me. Nice.

The second ingredient has yet to be cracked. Intellectual companionship requires more than conversations about the time of day, the weather, or whether I want to drink orange juice or water. However, artificial intelligence is moving rapidly: in 2014, a chatbot masquerading as a 13-year-old boy was claimed to be the first to pass the Turing test, the famous challenge–devised by Alan Turing–in which a machine must fool humans into thinking that it, too, is human.

That said, the bar is fooling just 30 per cent of the judging panel. Eugene, as the chatbot was called, convinced 33 per cent, and even that is still disputed. The biggest hurdle to a satisfying conversation with a machine is its lack of a point of view. This requires more than a capacity to formulate smart answers to tricky questions, or to randomly generate opinions. A point of view is something subtle and consistent that becomes apparent not in a few hours, but during many exchanges on many unrelated topics over a long period.

Which brings me to the third and most fraught ingredient: emotional attachment. I think this will happen. In the film Her, a man falls in love with the operating system of his computer. Samantha, as he calls her, is not even embodied as a robot; her physical presence is no more than a computer interface. Yet their affair achieves a surprising degree of plausibility.

In the real world, there is–so far–no attested case of the formation of any such relationship. But some psychologists are, inadvertently, doing the groundwork through their attempts to develop computerised psychotherapy. These date back to the mid-1960s when the late Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology, devised a program called ELIZA to hold psychotherapeutic conversations of a kind. Others have since followed his lead. Their relevance in this context is less their success (or lack of it) than the phenomenon of transference: the tendency of clients to fall in love with their therapists. If the therapist just happens to be a robot … well, so what?
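ELIZA’s trick was strikingly simple: match a keyword pattern in what the patient types, reflect their pronouns back at them, and return the result as a question. A minimal sketch of that technique in Python (an illustration of the idea, not Weizenbaum’s original code; the patterns and responses here are invented for the example):

```python
import re

# Toy ELIZA-style responder: match keyword patterns, reflect the
# speaker's first-person words into second person, and reply with
# a question built from the reflected fragment.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words (I, my, am...) for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Return a canned question for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when nothing matches
```

So `respond("I feel anxious about my health")` yields “Why do you feel anxious about your health?”–a reply that understands nothing, yet is uncannily easy to read warmth into, which is precisely the phenomenon Weizenbaum found so troubling.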

The quality and the meaning of such attachments are the key issues. The relationships I value in life–with my wife, my friends, my editor–are emergent products of interacting with other people, other living systems comprising, principally, carbon-based molecules such as proteins and nucleic acids. As an ardent materialist, I am not aware of evidence to support the vitalist view that living things incorporate some ingredient which prevents them being explained in purely physical and chemical terms. So if silicon, metal and complex circuitry were to generate an emotional repertoire equal to that of humans, why should I make distinctions?

To put it baldly, I’m saying that in my closing years I would willingly accept care by a machine, provided that I could relate to it, empathize with it and believe that it had my best interests at heart. But that’s the reasoning part of my brain at work. Another bit of it is screaming: What’s the matter with you? What kind of alienated misfit could even contemplate the prospect?

So I’m uncomfortable with the outcome of my investigation. Though I am persuaded by the rational argument about why machine care should be acceptable to me, I just find the prospect distasteful – for reasons I cannot, rationally, account for. But that’s humanity in a nutshell: irrational. And who will care for the irrational human when they’re old? Care-O-bot, for one. It probably doesn’t discriminate.
