
The quest to make AI less prejudiced

Ana Kova for Quartz
By Helen Edwards

Founder of Sonder Scheme


In 2016, researchers from Princeton University and the University of Bath made waves in the AI research community with a landmark study. They examined word embeddings, a common tool AI researchers use to represent language as vectors of numbers, derived from a large database of text from the internet. In those embeddings they found associations that strongly correlated with human biases, including mundane ones: people find flowers more pleasant than bees, and weapons less pleasant than musical instruments.

They also found associations that we would recognize today as stereotypes: female names were more likely to be associated with family than with careers, and with the arts rather than the sciences. The biased associations they uncovered also mapped onto real-world discrimination. Previous research had found that US job candidates with traditionally European-American names were 50% more likely to get job interviews than candidates with African-American names. The researchers replicated that finding using nothing more than the fact that European-American names sat closer to pleasant words in their data.
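The kind of association test the researchers ran can be sketched in a few lines. The idea: measure how close a word's vector sits to "pleasant" words versus "unpleasant" words, using cosine similarity. This is a minimal toy illustration, not the study's actual code; the three-dimensional vectors below are invented for demonstration (real embeddings have hundreds of dimensions learned from text).

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" (hypothetical values, chosen for illustration only).
vectors = {
    "flower":     (0.9, 0.1, 0.2),
    "bee":        (0.2, 0.8, 0.3),
    "pleasant":   (0.8, 0.2, 0.1),
    "unpleasant": (0.1, 0.9, 0.4),
}

def association(word):
    """Positive score: the word sits closer to 'pleasant' than 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

# In this toy space, "flower" leans pleasant and "bee" leans unpleasant,
# mirroring the pattern the study measured in real embeddings.
print(round(association("flower"), 3))
print(round(association("bee"), 3))
```

The study's actual method aggregated such scores over whole sets of target and attribute words and tested them for statistical significance, but the core measurement is this difference of cosine similarities.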

These were disturbing revelations for AI. But the researchers' point was to start a conversation about bias not just in algorithms but in humans. Because prejudice is a human trait, shaped by cultural norms and individual behavior, addressing bias in AI is not solely a technical challenge.
