
Google explains how artificial intelligence becomes biased against women and minorities

By Dave Gershgorn

Artificial intelligence reporter


Time and again, research has shown that the machines we build reflect how we see the world, whether consciously or not. For artificial intelligence that reads text, that might mean associating the word “doctor” with men more than women; for AI that recognizes images, it might mean misclassifying black people as gorillas.

Google, which was responsible for the gorilla error in 2015, is now trying to educate the masses, via a short explainer video, on how AI can accidentally perpetuate the biases held by its makers. It’s a nice bit of public relations, but also a pretty good overview of simple ways AI programmers can bias their algorithms.

The video outlines three kinds of bias:

Interaction bias: Users (you and me!) bias an algorithm by the way we interact with it. As an example, Google asked users to draw a shoe. Most drew a man’s shoe, so the system didn’t learn that high heels were also shoes.

Latent bias: The algorithm incorrectly correlates ideas with gender, race, sexuality, income, etc. This is the “doctor” example: the word gets tied to men simply because that’s what the stock imagery used for training shows.

Selection bias: The data used to train the algorithm over-represents one population, making the system work better for that group at the expense of others. If image recognition is trained mostly on photos of white people, white contestants will win AI-judged beauty contests (a sketch of this effect follows the list).
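To make selection bias concrete, here is a minimal, hypothetical sketch in Python; it is not Google’s code or any real dataset. A classifier is trained on synthetic data that over-represents one made-up “group,” and its accuracy drops sharply on the under-represented group. The group definitions, feature distributions, and sample sizes are all invented for illustration.

```python
# Hypothetical demonstration of selection bias on synthetic data.
# Groups, distributions, and sizes are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Sample one synthetic group: 2 features, with that group's true
    decision boundary located at x0 + x1 = 2 * shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training set over-represents group A (95%) versus group B (5%).
Xa, ya = make_group(950, shift=0.0)  # group A
Xb, yb = make_group(50, shift=2.0)   # group B, distributed differently
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets: the under-represented group fares worse.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=2.0)
print("accuracy on group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy on group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

The classifier fits the majority group’s decision boundary and scores near chance on the minority group. In this toy setting, the fix isn’t a cleverer model but more representative training data.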

These aren’t the only mechanisms by which AI can become biased, but they’re a good starting point for getting acquainted with the idea. For a deeper dive, read some of Quartz’s previous coverage on the subject.
