STATE OF PLAY

The quest to make AI less prejudiced

By Helen Edwards

Founder of Sonder Scheme

In 2016, researchers from Princeton University and the University of Bath made waves in the AI research community with a landmark study. They examined word embeddings, a common tool AI researchers use to represent language, derived from a large database of text from the internet, and found associations that strongly correlated with human biases—including innocuous ones, such as people finding flowers more pleasant than bees and musical instruments more pleasant than weapons.

They also found associations that we would recognize today as stereotypes: female names were more likely to be associated with family than with careers, and with the arts rather than the sciences. The biased associations they uncovered also mapped onto real-world discrimination. Previous research had found that US job candidates with traditionally European-American names were 50% more likely to get interview callbacks than candidates with African-American names. The researchers were able to replicate that finding using nothing more than how much more closely European-American names were associated with pleasant words in their data.
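To give a sense of what that kind of measurement looks like, here is a minimal sketch in Python. It assumes each word is represented as a vector and scores a name by how much closer it sits to a set of "pleasant" words than to "unpleasant" ones, using cosine similarity. The random vectors and tiny word lists below are hypothetical stand-ins for illustration only, not the study's actual data or its full statistical test.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, pleasant_vecs, unpleasant_vecs):
    # Mean similarity to "pleasant" words minus mean similarity to
    # "unpleasant" words: a positive score means the word sits closer
    # to pleasant words in the embedding space.
    return (np.mean([cosine(word_vec, p) for p in pleasant_vecs])
            - np.mean([cosine(word_vec, q) for q in unpleasant_vecs]))

# Hypothetical toy vectors standing in for real embeddings learned from
# web text (the study used vectors trained on internet-scale corpora).
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in
              ["emily", "jamal", "joy", "love", "agony", "failure"]}

pleasant = [embeddings["joy"], embeddings["love"]]
unpleasant = [embeddings["agony"], embeddings["failure"]]

for name in ["emily", "jamal"]:
    print(name, round(association(embeddings[name], pleasant, unpleasant), 3))

With real embeddings trained on web text, systematic gaps in scores like these between groups of names are what the researchers used to echo the job-interview finding.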

These were disturbing revelations for AI. But the researchers’ point was to start a conversation about bias not just in algorithms but in humans. Because prejudice is a human trait, shaped by cultural norms and individual behavior, addressing bias in AI is not solely a technical challenge.
