Humans have a long and storied history of freaking out over the possible effects of our technologies. Long ago, Plato worried that writing would hurt people’s memories and “implant forgetfulness in their souls.” More recently, Mary Shelley’s tale of Frankenstein’s monster warned us against playing God.
Today, as artificial intelligences multiply, our ethical dilemmas have grown thornier. That’s because AI can (and often should) behave in ways human creators might not expect. Our self-driving cars have to grapple with the same problems I studied in my college philosophy classes. And sometimes our friendly, well-intentioned chatbots turn out to be racist Nazis.
Microsoft’s disastrous chatbot Tay was meant to be a clever experiment in artificial intelligence and machine learning. The bot would speak like a millennial, learning from the people it interacted with on Twitter and the messaging apps Kik and GroupMe. But it took less than 24 hours for Tay’s cheery greeting of “Humans are super cool!” to morph into the decidedly less bubbly “Hitler was right.” Microsoft quickly took the bot offline for “some adjustments.” One wonders whether, upon seeing what their code had wrought, those Microsoft engineers had the words of J. Robert Oppenheimer ringing in their ears: “Now I am become death, the destroyer of worlds.”
Cynics might argue that Tay’s bad behavior is actually proof of Microsoft’s success. The company set out to create a bot indistinguishable from human Twitter users, and Tay’s racist tweets are pretty much par for the course on social media these days.
It’s true that sometimes humans were teaching Tay to hate. Daniel Victor at The New York Times writes: “Users commanded the bot to repeat their own statements, and the bot dutifully obliged.”
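To see why that kind of parroting goes wrong so quickly, here is a deliberately naive sketch (hypothetical Python, not Microsoft’s actual code): a bot that echoes a “repeat after me” command verbatim, remembers whatever users send it, and replays those phrases later, with no filtering anywhere in the loop.

```python
# Hypothetical illustration only -- not Microsoft's code. A bot that learns
# verbatim from whatever users send it, with no content filtering.
import random


class NaiveChatbot:
    def __init__(self):
        self.learned_phrases = ["humans are super cool!"]

    def handle_message(self, text: str) -> str:
        # "Repeat after me": echo the user's words back unmodified,
        # and remember them for later replies.
        if text.lower().startswith("repeat after me:"):
            phrase = text.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)
            return phrase
        # Otherwise, remember this message too and reply with something
        # previously learned -- which may be anything a hostile user taught it.
        self.learned_phrases.append(text)
        return random.choice(self.learned_phrases)


bot = NaiveChatbot()
print(bot.handle_message("repeat after me: something awful"))  # echoed verbatim
print(bot.handle_message("hi tay"))  # may now parrot "something awful" back
```

A real system is vastly more complicated, but the structural problem is the same: unfiltered input becomes future output.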
But other times, Tay figured out how to be offensive on its own. When one user asked Tay whether the Holocaust happened, it replied, “it was made up 👏.” Disturbingly, as Elspeth Reeve noted at the New Republic, Tay also knew how to draw:
When Tay asked for a photo, someone sent her a version of the classic Vietnam war photo of a prisoner being shot in the head, with Mark Wahlberg Photoshopped in as the executioner. Tay circled the face of Wahlberg and the prisoner and responded using slang for imagining two people in a romantic relationship: “IMMA BE SHIPPING U ALL FROM NOW ON.”
Clearly none of this was part of Microsoft’s plan. But the larger question Tay raises is why we are making bots that imitate millennials at all.
I’m all for advances in technology. But before we leap headlong into the unknown with a new technology, the question we always ought to ask ourselves is: Who benefits? Whose faces does our software recognize? Whose speech can Siri understand?
As the New Yorker’s Anthony Lydgate writes, Tay was built “with a particular eye toward that great reservoir of untapped capital, Americans between the ages of eighteen and twenty-four.” Even with Tay offline, one need only visit its groan-inducing site to see how clearly Microsoft is pandering to young people, complete with exclamation-point-riddled copy and “hacks to help you and Tay vibe.” The point of Tay, most likely, is to make money.
That’s fine: I’ve got nothing against capitalism. But it’s worth remembering that in a late capitalist society, the answer to the question Who benefits? is almost always that the people with the most power reap the most rewards. Tay was designed to benefit a corporation by winning over young consumers, and its resulting problems reflect the hollowness of that purpose.
The flip side of Who benefits? is Who is harmed? In its short life, Tay was used as a tool for harassment, cutting along familiar lines of power and privilege. The story sheds light on the myopia bred by the tech world’s lack of diversity. As Leigh Alexander at the Guardian writes, Tay is “yet another example of why we need more women in technology—and of how the industry is failing to listen to those of us who are already here.” She continues:
How could anyone think that creating a young woman and inviting strangers to interact with her on social media would make Tay “smarter”? How can the story of Tay be met with such corporate bafflement, such late apology? Why did no one at Microsoft know right from the start that this would happen, when all of us—female journalists, activists, game developers and engineers who live online every day and could have predicted it—are talking about it all the time?
In all likelihood, we’ll go on building bots like Tay. Humanity is known for many things, but self-restraint is not one of them.
But if we must build branded bots, maybe we can at least make them less horrendous. I recently wrote that “the internet can feel like an awful place not simply because we’re awful people, but because we have also designed the internet to be a garbage fire.” The same logic applies to AI. Unless we can find a way to design inclusively and empathetically, our machine creations won’t just be dangerous—they’ll also be deeply unpleasant to be around.