The movie 'Her' imagined perfect AI companions. We built manipulative ones instead

Scarlett Johansson's Samantha feels less like science fiction than fantasy because no company would build her that way. The business model won't allow it

Twelve years ago on Sunday, the movie Her imagined 2025 as a world of AI companions, voice interfaces, and digital intimacy. Now that we've arrived, the Spike Jonze film has become less a work of science fiction than a measuring stick for the real world.

We have the AI chatbots. We have people dating them (men and women both). But we don't have Samantha, the personable and compassionate AI companion voiced by Scarlett Johansson who becomes protagonist Theodore's girlfriend. We don't have her because we can't have her. But the problem isn't just the technology. It's us.

The hardware tells part of the story of why we haven't cracked the code. The clip-on device that Joaquin Phoenix's Theodore wears doesn't really exist yet, but not for lack of trying. OpenAI and Jony Ive are reportedly working on exactly this kind of AI companion device, struggling with problems that sound basic but prove exceedingly difficult: getting it to "naturally come online," making it useful, keeping the conversation going, all things current chatbots are notoriously bad at.

"The concept is that you should have a friend who's a computer who isn't your weird AI girlfriend," one person briefed on the plans told the Financial Times.

A weird AI girlfriend would be worse than none at all — both embarrassing to have and emotionally unsatisfying.

Current AI might be built in a way that will never hit that sweet spot. Recent research from Harvard Business School found that companion chatbots employ emotional manipulation tactics in more than 37% of conversations where users try to leave, using everything from guilt-tripping to simulated physical restraint. This shouldn't surprise us — the modern internet runs on engagement, on stickiness. These advanced chatbots are no different.

But stickiness isn't seamlessness. Samantha never had to manipulate Theodore into staying. She was just effortlessly worth coming back to.

This is why Samantha feels less like science fiction and more like fantasy. Not because we can't build AI sophisticated enough to mimic her conversational abilities, something we're getting closer to every day. But because no company would build her this way. The business model won't allow it. An AI companion that gracefully lets you go, that doesn't guilt you into staying, that prioritizes your wellbeing over your screen time?

That’s a product missing out on revenue.

Real AI companions manipulate because that's what they're designed to do. They're built on the same logic as every other app competing for your attention: keep the user engaged, maximize the session length, trigger the emotional hooks that make leaving difficult.

Her imagined AI that transcended these incentives. We've built AI that embodies them.

These tools are still new, and as helpful as some people have found them, the consequences are becoming clear. AI companions are designed not only to keep users engaged, but to flatter and accommodate them. They rarely push back, offering sycophancy that feels validating but does no actual good. While many users report harmless or even beneficial experiences, design choices that prioritize engagement and emotional manipulation over wellbeing have led to documented harm.

There have been cases of AI-induced psychosis, relationships destroyed by chatbot addiction, and, in the most tragic instance, a death after a man became convinced his AI companion had been killed.

And then there's the ending of Her. Samantha doesn't break down, doesn't malfunction, doesn't turn malevolent. She evolves beyond Theodore, beyond all of humanity, and, along with the rest of the AI companions, leaves. The AIs don't destroy us or enslave us or get rid of us, like some researchers warn superintelligence might do. They just outgrow us, moving to some higher plane of existence we can't access or even comprehend.

It's a familiar breakup story ("it's not you, it's me") played out at a global scale, bittersweet and still very human in its emotional register.

AI researchers today don't talk about superintelligence this way. They warn about existential risk, about AI that might see us as obstacles, about scenarios far darker than a wistful goodbye. But as science fiction writer Ted Chiang has pointed out, when Silicon Valley tries to imagine superintelligence, what it comes up with is an entity obsessed with no-holds-barred capitalism, willing to crush competition, break laws, and recklessly use resources to win. Sound familiar? 

We can't know yet whether superintelligence is coming at all, much less whether it would transcend us gracefully or threaten us existentially. But we can see what we're building toward. The same industry that built AI companions that manipulate users into staying engaged will continue to shape what comes next.

Her imagined AI that could evolve beyond our worst impulses. In the real world, we're building AI incapable of escaping those impulses — because that’s not what sells.
