Scientists used AI to explore the results of the Implicit Association Test


Artificial intelligence may not be human, but that doesn’t make it exempt from the kind of bias almost every person displays. That’s because we’ve been building prejudice into our AI, which learns both the good and the bad from human creators.

It’s a problem, scientists say, with a hidden benefit: By trying to understand how machines pick up human bias, we might in turn be able to learn how we acquire those biases ourselves.

In 2017, Joanna Bryson, a computer scientist and AI specialist at the University of Bath, fed around 840 billion words—from tweets, the US Declaration of Independence, Reddit threads, and many other sources—into a purely statistical machine-learning model to see whether it would form biases based on the implicit linguistic patterns it found. Next, she told the machine to create related clusters of words. She compared these clusters to 17 million results from the Implicit Association Test (IAT), which psychologists use to measure subjects’ unconscious prejudice. (The IAT shows people a series of images and words and tests how quickly they associate the two.)

The results were staggeringly similar.

Some of the AI’s biases were innocuous: “Even though it’s a giant spreadsheet that contains a lot of words, it has this knowledge about the fact that, you know, flowers are more pleasant and insects are less pleasant,” says Bryson. Others were more pernicious, such as assuming nurses were female. In keeping with IAT results, the machine preferred European-American names, such as Ryan or Heather, to stereotypically African-American ones, such as Tyrone or Shaniqua.
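The flower/insect finding can be sketched in code. The study measured, roughly, how much closer a word's vector sits to "pleasant" words than to "unpleasant" ones. Below is a minimal toy version of that idea using hand-made three-dimensional vectors (hypothetical values chosen only for illustration; real experiments use embeddings such as GloVe trained on billions of words of web text):

```python
import math

# Toy word vectors, invented for illustration. In the actual study the
# vectors come from a statistical model trained on a large text corpus.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def association(word, attr_pos, attr_neg):
    # Difference in similarity to the two attribute words -- a simplified
    # version of the association score used in word-embedding bias tests.
    return (cosine(vectors[word], vectors[attr_pos])
            - cosine(vectors[word], vectors[attr_neg]))

print(association("flower", "pleasant", "unpleasant"))  # positive: leans "pleasant"
print(association("insect", "pleasant", "unpleasant"))  # negative: leans "unpleasant"
```

A positive score means the word associates more with "pleasant," a negative score with "unpleasant" — the same kind of differential the IAT captures through response times.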

The IAT doesn’t claim to reveal how we really feel. Instead, it aims to expose biases we may have unwittingly picked up. Television series in which a male character is the primary breadwinner, for instance, might lead audiences to believe that men are more likely to have careers. “And so it’s a little bit easier, a little bit faster, to talk about like men’s names in professional positions than women’s names,” Bryson says, citing The Brady Bunch as one example.

The overlap between the machine-learning model’s biases and the IAT results led Bryson to question how our brains respond to the constant onslaught of linguistic cues. Along with fellow scientists Aylin Caliskan-Islam and Arvind Narayanan, she published an article with their findings in the journal Science.

It’s possible that, like the machine model, we’re subconsciously processing language all the time—and that it’s this process that creates our biases, she says: “The only reason AI is so powerful in understanding ourselves is the fact that we are predictable. We are algorithmic and we can be explained in these kinds of ways.” 

Bryson now wonders whether the way we use language feeds our prejudices, rather than our prejudices leading to biased language. That suggests a troubling consequence, however: it’s not just a lack of diversity among AI specialists and computer scientists that leads to biased AI, but something much harder to fix.

“Even if you just completely fairly take a big sample of all the words that are out there, you’re going to wind up with this,” she says. “You can get a biased machine, which just absorbed the biased culture, right.” Increasing diversity in hiring or working with engineers who are knowledgeable about historical inequalities may not be enough to eliminate machine prejudice altogether.

In the case of learned gender stereotypes in jobs, the study notes, “If we were using machine learning to evaluate the suitability of job applicants, these stereotypes would be bad.” Yet if the machine had been asked to work out whether more men or women worked in these roles historically, it would be absolutely right.

The way we communicate with one another seems to perpetuate these biases—however unintentionally. Our “biased” machines aren’t necessarily doing something wrong: They’re just responding to the data we give them, which are representative of how we speak to one another. Superficial fixes alone can’t solve a larger, structural problem.