Over the last few days, a slew of reporting, inspired by ProPublica, has revealed that the programmatic structure of most online advertising makes it quite easy to create ads aimed at those who have espoused racist, antisemitic, or otherwise hateful ideas.
Here’s a quick rundown of the major internet companies, and what has been discovered about their advertising platforms:
Facebook

On Sept. 14, ProPublica reported that Facebook allowed advertisers to target categories and ideas such as “Jew hater,” “How to burn jews,” and “History of ‘why jews ruin the world,’” based on interests Facebook users had expressed on the social network and terms they had used to describe themselves.
While Facebook removed those categories after ProPublica’s investigation, Slate then discovered dozens of other racist, sexist, and xenophobic categories that advertisers could potentially target. It took Facebook less than a minute to approve ads against phrases like “Kill Muslimic Radicals” and “Ku-Klux-Klan,” and Slate found myriad other options, like “Killing Bitches,” “Killing Hajis,” and “Nazi Party (Canada).”
Facebook released a statement yesterday after ProPublica’s report, saying in part:
Keeping our community safe is critical to our mission. And to help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue. We want Facebook to be a safe place for people and businesses, and we’ll continue to do everything we can to keep hate off Facebook.
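Facebook hasn’t detailed how those self-reported fields turned into targetable categories, but ProPublica’s findings are consistent with a pipeline that simply tallies whatever users type into free-text profile fields and exposes any string with a large enough audience to advertisers. Here is a minimal sketch of that mechanism; the threshold, names, and data structures are all assumptions, not Facebook’s actual system:

```python
# Hypothetical sketch: self-reported profile text becomes an ad-targeting
# category with no human review. The threshold and names are invented.
from collections import Counter

MIN_AUDIENCE = 100  # assumed minimum audience before a field is targetable

field_counts: Counter = Counter()

def record_profile_field(text: str) -> None:
    """Tally each string a user types into a free-text field (e.g. 'field of study')."""
    field_counts[text.strip().lower()] += 1

def targetable_categories() -> list:
    """Expose every self-reported string with a big enough audience to advertisers."""
    return [field for field, count in field_counts.items() if count >= MIN_AUDIENCE]
```

The point of the sketch is that nothing in this flow ever asks whether a category should exist: any phrase enough users type about themselves becomes something an advertiser can buy against.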
Google

BuzzFeed discovered similar targeting issues on Google’s AdWords platform, which runs the advertisements you see on Google search results pages. Typing phrases like “why do jews ruin everything” into the keyword-suggestion tool (which advertisers use to build their ads and figure out who to target) led the system to generate further suggestions like “jews ruin the world” and “jewish parasites.” BuzzFeed was also able to build and launch a campaign around the phrase “black people ruin neighborhoods.”
When Quartz attempted to recreate BuzzFeed’s efforts using similar terms, or terms like those used by ProPublica and Slate, no keyword suggestions were returned. Google has since disabled many of the keywords that BuzzFeed tested.
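Google hasn’t said how its fix works, but the behavior Quartz observed, where previously offensive queries now return no suggestions at all, matches a simple denylist check applied before suggestions are shown to advertisers. A minimal sketch, with an entirely hypothetical denylist and matching rule:

```python
# Hypothetical sketch of a denylist filter on keyword suggestions. The list
# contents and the substring-matching rule are assumptions, not Google's code.
DENYLIST = {"jews ruin", "jewish parasites", "black people ruin"}

def filter_suggestions(suggestions: list) -> list:
    """Drop any suggestion containing a denylisted phrase."""
    return [s for s in suggestions
            if not any(term in s.lower() for term in DENYLIST)]

# The offensive suggestion is suppressed; the benign one survives.
print(filter_suggestions(["jewish parasites", "jewish holidays 2017"]))
# -> ['jewish holidays 2017']
```

A substring check like this is also easy to defeat with misspellings and variants, which may help explain why each new report keeps surfacing terms the platforms hadn’t caught.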
Sridhar Ramaswamy, Google’s senior vice president in charge of ads, told Quartz in a statement:
Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing. We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions. That’s not good enough and we’re not making excuses. We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again.
Twitter

The Daily Beast was able to target similarly derogatory audiences on Twitter. It reported:
Twitter’s advertising platform tells prospective marketers it has 26.3 million users interested in the derogatory term “wetback,” 18.6 million accounts that are likely to engage with the word “Nazi,” and 14.5 million users who might be drawn to “n**ger.”
A Twitter representative told Quartz about the Daily Beast’s report:
The terms cited in this story have been blacklisted for several years and we are looking into why the campaigns cited in this story were able to run for a very short period of time. Twitter actively prohibits and prevents any offensive ads from appearing on our platform, and we are committed to understanding 1) why this happened, and 2) how to keep it from happening again.
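Twitter didn’t explain how blacklisted terms could nonetheless run “for a very short period of time,” but one plausible mechanism, offered here strictly as an assumption, is a blocklist check that happens in an asynchronous review step after a campaign is already live, rather than synchronously at submission:

```python
# Hypothetical sketch: a campaign goes live immediately and is vetted only by
# an asynchronous review step, so a blocklisted term can run briefly.
# Nothing here reflects Twitter's actual architecture.
import queue
import threading
import time

BLOCKLIST = {"nazi"}            # hypothetical entry
live_campaigns: set = set()
review_queue: queue.Queue = queue.Queue()

def submit(campaign: str) -> None:
    live_campaigns.add(campaign)   # live immediately...
    review_queue.put(campaign)     # ...and only then queued for review

def reviewer() -> None:
    while True:
        campaign = review_queue.get()
        time.sleep(0.1)            # review latency: the ad serves meanwhile
        if any(term in campaign.lower() for term in BLOCKLIST):
            live_campaigns.discard(campaign)   # removed only after the fact

threading.Thread(target=reviewer, daemon=True).start()
submit("nazi memorabilia")
print("just after submission:", live_campaigns)  # still live
time.sleep(0.3)
print("after review:", live_campaigns)           # taken down
```

Under this model the ad really does run, just until the reviewer catches up, which is consistent with both the Daily Beast’s experience and Twitter’s statement.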
Snapchat
Quartz checked Snapchat’s advertising platform to see whether we could target ads using terms like those used on the other platforms. We could not: Snapchat’s targeting categories don’t seem to be as granular as the other platforms’. Those networks are also far more text-based than Snapchat, so it’s likely easier for them to glean what their users are sharing than it is to do the same from the videos and images posted to Snapchat.
Bing
Bing, Microsoft’s second-place search network, seems to have a problem similar to the other platforms’. When Quartz created a test advertising campaign on Bing Ads, we weren’t able to directly target specifically loaded terms, but searching for just about any phrase in Bing’s “keyword suggestions” generator will return specific keywords that you might want to target instead. Here’s one example, using “Hitler” as the search term:
[Screenshot: Bing Ads keyword suggestions returned for the search term “Hitler”]
A representative for Bing told Quartz:
We take steps to ensure our Bing Ads always meet reasonable standards. We are committed to working with partners who share our vision for relevant, impactful brand interaction and respect the integrity of consumer choice.
Yahoo
Quartz attempted to create an ad campaign on Yahoo, but it seems there’s no simple way to create one online without first speaking to a representative from Oath (Yahoo’s parent company). And presumably fewer people would feel comfortable telling a sales rep the sorts of things they’re targeting than they would typing them into a computer system. Hopefully.
LinkedIn

Microsoft’s professional social network doesn’t seem to let users target ads based on arbitrary phrases or demographics. Other than geography, these are the only things you can target against on LinkedIn:
[Screenshot: LinkedIn’s available ad-targeting options]
The only section with potential for hateful terms is “Member groups,” but a cursory search for terms like those used above didn’t reveal many professional hate groups to target on the platform. We did, however, come across this group:
[Screenshot: the LinkedIn group in question]
Upon further inspection, however, it seems this group was created by a LinkedIn employee testing whether a group with a title like this would be allowed. Obviously, it worked:
[Screenshot: the group live on LinkedIn]
LinkedIn sent Quartz the following statement:
Hate has no place on LinkedIn and will not be tolerated. When we are made aware of such content, we act swiftly to enforce our policy and remove said content. On Friday, a member of our team created a group solely for internal testing purposes and after a brief testing period, we took the group down.