


Zackary Drucker / The Gender Spectrum Collection
The human-centered approach that can combat algorithmic bias.

“Color-blindness” is a bad approach to solving bias in algorithms

By Jessie Daniels

Faculty fellow at Data & Society

The rise of “ethical AI” is in the headlines. Among the principles put forward for ethical AI are diversity in hiring and “avoiding bias” in machine learning. But there is a chasm between these aspirational goals and the reality of the tech world.

To forge an ethical AI, we need to include racial literacy. Racial literacy is a deep understanding of systemic racism and the ability to address racial issues in face-to-face encounters. In the tech world, that means considering race in the initial phase of product development and recognizing the way the broader social world seeps into technological design, infrastructure, and implementation to unintentionally reproduce racism. While some argue that the highest ethical standard in technology is to be color-blind, neither research nor experience bears this out.

For example, when Stanford launched its new institute for Human-Centered Artificial Intelligence (HAI) in March, the goal was to address bias in AI. But it seems to have simply replicated racial bias in the hiring of its faculty. Of the 121 faculty members initially announced as part of the institute, more than 100 appear to be white. If even Stanford can’t get it right in an institute explicitly intended to right AI’s wrongs, we need to rethink how we’re approaching the problem entirely.

And it’s not just people perpetuating racial bias. The algorithms that are at the center of AI reproduce existing inequalities, too. As researcher Safiya Noble explains in Algorithms of Oppression: How Search Engines Reinforce Racism, in searches for “gorillas,” the top image results are pictures of black people. When users typed the phrase “why are black women so,” Google offered autocomplete suggestions such as “angry” and “loud.” Ideas about race get embedded into search algorithms because they are baked into our society and into the data that those algorithms draw upon.

It’s also easy to see this kind of racial bias in the AI behind online ads, as Latanya Sweeney, a professor at Harvard University, discovered. When a colleague typed Sweeney’s name into a search engine, the ad that popped up said: “Latanya Sweeney. Arrested?” Sweeney had never been arrested, and wondered why such an ad appeared for her. So she began a systematic study of the algorithms behind online advertising.

What she found was that first names are racialized. In other words, first names like Geoffrey, Jill, and Emma are more likely to be given to white babies, while first names like DeShawn, Darnell, and Jermaine are more likely to be given to black babies. She analyzed thousands of online ads generated by first-name searches and found that when the first name was associated with being black, the advertisements that appeared suggested an arrest record in 81% to 86% of ads served on Reuters and 92% to 95% on Google, while those with names associated primarily with being white did not. It was the racial bias of the algorithm associating her first name, Latanya, with arrest that had created the results Sweeney had witnessed with her colleague.

To be sure, the tech industry has made attempts at addressing bias, mostly through implicit bias trainings. These trainings use a computer-assisted “implicit association test” (IAT) that measures the strength of associations between groups of people (e.g., black people) and evaluations (e.g., good, bad) or stereotypes (e.g., athletic, clumsy). The IAT consistently demonstrates that we are all more biased than we’re comfortable acknowledging, but after two decades, the promise of implicit bias training as a solution to racial bias has not paid off.

The notion that our brains are “hard-wired” for bias leaves us in a kind of cul-de-sac, unable to escape the programming of our minds. If we want a truly ethical AI, we need a different approach, one that looks to ways we can build the skills we need in order to address racial bias in tech.

Being “color blind” isn’t being racially literate

Some people say that the ethical way forward in technology is to adopt race-neutral strategies. However, neither research nor relevant experience in the tech industry supports this as a way out of unintended racial bias.

There is another way to teach the people who create technological innovation to anticipate racial bias in AI. If people at all levels of the tech industry were to ask basic racial-literacy questions, then these unanticipated outcomes might become more predictable. Such questions include:

  • How might racial bias influence the technology we are developing?
  • What are the already existing racial structures that might be affecting the design process?
  • How does the racial composition of our team shape the way we think about how the technology gets used?

Racial literacy could have helped Allison McGuire and Daniel Herrington, the app developers from Washington, DC who designed an app called SketchFactor. This app allows users to report on what they think are sketchy parts of town so that other users can navigate around them. The app essentially crowdsources fear, and that fear is racialized.

The issue here is what one researcher calls “technological redlining.” In other words, the app reinforces the idea that some people, specifically black people and the places they live, are inherently more dangerous than white people and the places they live. While these technologies are presented as race-blind, value-neutral solutions to the needs of consumers, in fact, they map onto and reinforce patterns in housing, policing, and health that are profoundly racialized.

We need racial literacy for deciphering propaganda online, too. When the Russian government launched an intelligence operation to undermine US elections, a key part of its strategy was exploiting American racism. In a report analyzing the 3,500 ads the Russian Internet Research Agency bought on Facebook, researcher Shireen Mitchell found that a majority of these propaganda pieces focused on themes of black identity and culture. The ads were specifically targeted to discourage participation by black voters while boosting turnout among white voters.

Researchers at the University of Washington found similar results in their analysis of Russian-created propaganda. They found systematic patterns of forged profiles, including contrasting “proud African American” and “the proud white conservative” as political identities. To the extent that these kinds of propaganda strategies are effective, they play on a kind of racial naivete at best and, at worst, a persistent reluctance to face reality.

Increasing racial literacy would certainly help with what one former Facebook employee called the company’s “black people problem.” “The widespread underrepresentation of faces of color in tech is already alarming,” says Mark S. Luckie, who recently left the social-media company, but not before issuing a public memo on its lack of attention to racial issues. Luckie contends that Facebook is failing both black users, who are overrepresented on the platform, and black employees, who make up only 4% of the company’s workforce.

And it’s not just Facebook. Luckie also worked at Twitter, and when he left that company in 2015, he wrote a similar send-off about his time there. Luckie makes a valid point that the monoculture of tech firms shapes the platforms and does a disservice to users and employees.

The issue with “the pipeline issue”

Racial literacy would also help us see the flaw in one of the most common responses to arguments like Luckie’s: the pipeline argument.

Stated plainly, this is the idea that there are not enough black people graduating with computer science and other relevant degrees to work in tech. But this is simply not true. Black people get the degrees—they just don’t get hired.

At every post-secondary education level, the percentage of black people with STEM degrees is greater than the percentage of black workers at major tech firms. Among STEM graduates with bachelor’s or advanced degrees, 57% are white, 8% are Hispanic, and 6% are black, according to American Community Survey data. But technical workers at Google, Microsoft, Facebook, and Twitter, according to the companies’ diversity reports (paywall), are on average 56% white but only 3% Hispanic and 1% black. The pipeline argument takes the burden off tech companies to do anything about the kinds of issues Luckie raises.

If we don’t want to reproduce racism in and through tech, we need a more proactive and thoughtful way to counter it further upstream in the process. That means people of all racial and ethnic backgrounds working in the tech industry need racial literacy as a core skill set for ethical AI.

Industry leaders, policy makers, and workers who are born in the US or western Europe need racial literacy to become fluent in the difficult discussions of racial inequality. If they’re not, they will inadvertently adopt the worst aspects of the dominant white culture.