Facebook built its lucrative advertising enterprise by showing businesses’ ads to just the right set of potential customers. But a new academic finding threatens the heart of that business, showing that Facebook’s algorithms can steer some job ads in discriminatory ways, even when advertisers weren’t trying to reinforce stereotypes about gender in the workforce.
The group of researchers, from Northeastern University, the University of Southern California and the digital civil rights advocacy group Upturn, ran ads for job openings in the lumber industry and for preschool teaching positions, each targeted to a gender-balanced audience. Facebook nevertheless showed the lumber ads mostly to men and the preschool teacher ads mostly to women.
Facebook offers granular options for advertisers to choose who should see their ads, so that a brand like Huggies can send its ads to new parents and political candidates can ask their supporters for money. But even after advertisers define everyone who could potentially see their ad, Facebook opaquely chooses who actually does, based partly on how likely its artificial intelligence algorithms predict each user is to click on it.
Facebook said in a statement, “We’ve been looking at our ad delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic—and we’re exploring more changes.” It said its recent changes meant to combat potentially discriminatory choices by advertisers were “only a first step.”
In a separate finding of the study, ads for houses for sale, again targeted to identical potential audiences, were shown mainly to white people, while ads for rentals were shown primarily to black people.
Automated but unintentional discrimination was a perhaps inevitable consequence of the tech industry’s favorite formula for maximizing “engagement.” Algorithms like these are designed to show you content similar to whatever “people like you” have read, clicked on, bought or listened to, and to do that for all content. That can be helpful when it leads to automatically advertising razor blades to recent razor purchasers, or recommending Ariana Grande songs to Justin Bieber fans.
But when these algorithms are applied to things that are highly regulated, like job ads, and “people like you” replicates sensitive offline groupings like race and gender, the modern web’s fundamental formula could be headed for a reckoning. When an algorithm “learns” that more men than women are interested in lumber industry jobs (even if it never sees a user’s gender and infers the pattern by correlating other information about a person’s likes and habits), the system is effectively deciding not to show those job ads to other women, solely because they’re women. That’s problematic: it recreates stereotypes, “boys’ clubs” and societal barriers that existed long before software.
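The proxy effect described above can be illustrated with a toy sketch. This is not Facebook’s actual system; the data and the interest categories are hypothetical, and the “predictor” is just a click-rate lookup. The point is only that a model trained on past engagement can skew delivery by gender without ever being shown gender, because a correlated feature stands in for it:

```python
# Toy illustration (not Facebook's real system): a click predictor
# trained on historical engagement skews ad delivery by gender even
# though it never sees gender, via a correlated interest feature.

# Hypothetical training data: (interest, clicked_lumber_ad)
history = [
    ("football", True), ("football", True), ("football", False),
    ("makeup", False), ("makeup", False), ("makeup", True),
]

def click_rate(interest):
    """Estimated click probability for users with a given interest."""
    clicks = [clicked for i, clicked in history if i == interest]
    return sum(clicks) / len(clicks)

# A new, gender-balanced audience. Interest is correlated with
# (but is not) gender in this made-up example.
audience = [
    {"gender": "man", "interest": "football"},
    {"gender": "man", "interest": "football"},
    {"gender": "woman", "interest": "makeup"},
    {"gender": "woman", "interest": "makeup"},
]

# "Deliver" the ad only where the predicted click rate clears a bar.
shown = [u for u in audience if click_rate(u["interest"]) > 0.5]

# Everyone shown the lumber ad is a man, even though the predictor
# never looked at the gender field: the interest acted as a proxy.
print([u["gender"] for u in shown])  # → ['man', 'man']
```

The same dynamic scales up: replace the single interest with thousands of likes and habits, and the proxy becomes harder to see but no less effective.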
Facebook has already settled a lawsuit over a separate issue: offering discriminatory targeting options, which some advertisers used to show ads for, among other things, sausage-making jobs only to men. It agreed to remove those options in March. And just last week, Facebook was sued by the US Department of Housing and Urban Development, both for offering discriminatory targeting options and for the kind of automated discrimination this paper shows can exist.
In legal documents related to those lawsuits, Facebook described its advertising system as a “neutral tool”—a claim that’s challenged by this research paper. If Facebook is contributing to discriminatory advertising on its platforms, that could endanger its legal immunity under a US law foundational to the internet—Section 230 of the Communications Decency Act—which protects internet businesses from being sued over the illegal activity of their users.
The researchers carefully constructed their study to discover whether Facebook’s algorithmic decisions were the cause of the gender-biased audiences for ads. To test this, they ran ads with images of stereotypically male or female things, but with the images made almost entirely transparent; they appeared all white to humans, but computers could still “see” the underlying image. Facebook still steered the ads with these modified images to particular audiences: those containing footballs, for instance, went to men, and those containing makeup supplies went to women. That effect could not have come from human reactions, since every photo looked the same to humans.
The researchers caution that they can’t prove Facebook’s algorithm would skew the delivery of any given job ad, since they can see results only for their own ads, with potential audiences created under their carefully controlled conditions. Ads for some jobs they created, like artificial intelligence developer and lawyer, were delivered to a roughly even mix, but those for preschool teachers, janitors and others skewed worse than three people of one gender for every one of the other. Ads for lumber industry jobs were seen by more than nine men for each woman.
To measure the difference between who could have seen their job ads and who actually did, the researchers constructed each ad’s potential audience to be evenly split between specific men and women in North Carolina (using public records that include gender), then used Facebook’s existing tools to see the gender breakdown of who actually saw the ads.
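Because the potential audience was built 50/50, any imbalance among actual viewers reflects delivery choices rather than targeting. A minimal sketch of that comparison follows; the viewer counts here are hypothetical placeholders, not figures from the study:

```python
# With a gender-balanced potential audience, delivery skew can be
# summarized as the ratio of the larger viewer group to the smaller.

def delivery_skew(shown_to_men, shown_to_women):
    """Majority-to-minority ratio among users who actually saw the ad."""
    hi = max(shown_to_men, shown_to_women)
    lo = min(shown_to_men, shown_to_women)
    return hi / lo

# Hypothetical counts: a result like this would mean roughly nine men
# saw the ad for every woman who did, despite the balanced audience.
print(delivery_skew(shown_to_men=900, shown_to_women=100))  # → 9.0
```

The study’s reported skews (more than 3:1 for preschool teacher ads, more than 9:1 for lumber ads) are ratios of exactly this kind.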
Facebook calculates the gender breakdown for the audience of every ad in its system, but only shares that data with the advertiser. So, how often does Facebook’s ads system transform ordinary job ads into discriminatory ones? The public has no way to find out.