STATE OF PLAY

The quest to make AI less prejudiced

By Helen Edwards for AI’s power problem

In 2016, researchers from Princeton University and the University of Bath made waves in the AI research community with a landmark study. They examined word embeddings, a common tool AI researchers use to represent language as numbers, derived from a large database of text scraped from the internet. In those embeddings they found associations that strongly correlated with human biases, including mundane ones: people find flowers more pleasant than bees, and weapons less pleasant than musical instruments.

They also found associations that we would recognize today as stereotypes: female names were more likely to be associated with family than with careers, and with the arts rather than the sciences. These biased associations mapped onto real-world discrimination. Previous research had found that US job candidates with traditionally European-American names were 50% more likely to be called for interviews than candidates with African-American names. The researchers replicated that finding using nothing more than the fact that, in their data, European-American names sat closer to pleasant words.
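To make that measurement concrete, here is a minimal sketch of the kind of association score involved, assuming the word vectors are already in hand. A word's lean toward "pleasant" is its average cosine similarity to a set of pleasant words minus its average similarity to a set of unpleasant words, the basic building block of the researchers' association test. The vectors below are hand-picked toy values purely for illustration; a real test would load pretrained embeddings such as GloVe.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, pleasant, unpleasant):
    """Mean similarity to the pleasant set minus mean similarity to the
    unpleasant set; positive means w leans toward 'pleasant'."""
    return (np.mean([cosine(w, p) for p in pleasant])
            - np.mean([cosine(w, u) for u in unpleasant]))

# Hand-picked 2-D toy vectors so the geometry is easy to see; real
# embeddings have hundreds of dimensions learned from internet text.
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
unpleasant = [np.array([-1.0, 0.1]), np.array([-0.9, 0.2])]
flower = np.array([0.8, 0.3])   # placed near the pleasant words
weapon = np.array([-0.7, 0.4])  # placed near the unpleasant words

print(f"flower: {association(flower, pleasant, unpleasant):+.2f}")  # positive
print(f"weapon: {association(weapon, pleasant, unpleasant):+.2f}")  # negative
```

The same score, computed for names instead of objects, is what let the researchers recover the job-interview disparity directly from the geometry of the embeddings.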

These were disturbing revelations for AI. But the researchers’ point was to start a conversation about bias not just in algorithms but in humans. Because prejudice is a human trait, shaped by cultural norms and individual behavior, addressing bias in AI is not solely a technical challenge.