Facebook says it has a tool to detect bias in its artificial intelligence

Facebook announced the anti-bias tool at F8, its annual developer’s conference.
Image: Reuters/Stephen Lam

Tech companies relying on artificial intelligence that allows them to serve billions of users are slowly acknowledging that those same algorithms can be biased against those who aren’t white, wealthy, or male.

Facebook announced Wednesday (May 2) that it’s testing a tool called Fairness Flow, an internal project that allegedly can determine whether a machine learning algorithm is biased, meaning it systematically provides certain groups worse results along lines of race, gender, or age. The tool could have a huge impact at a company like Facebook, which faces constant scrutiny from lawmakers and academics about bias introduced into its platform by algorithmic decision-making.

The tool’s first test focused on Facebook’s jobs algorithm, which matches job seekers with companies looking to hire, according to a Facebook spokesperson. Hiring algorithms can be especially prone to bias, since hiring practices already suffer from unconscious preferences toward men or people with white-sounding names. Facebook didn’t say why it tested the tool on its jobs algorithm first.

Fairness Flow analyzed the diversity of data used to train the jobs algorithm, as well as the quality of recommendations for men, women, and people over and under 40, Facebook research scientist Isabel Kloumann said at Facebook’s F8 developers conference yesterday (May 2).
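Facebook hasn’t published how Fairness Flow works internally. As a rough illustration of the general idea Kloumann described, comparing the quality of a model’s results across demographic groups, here is a minimal sketch in Python. All names and the sample data are hypothetical, not drawn from Facebook’s system:

```python
# Hypothetical sketch of a group-fairness check: compute a quality metric
# (here, simple accuracy) per demographic group, then measure the largest
# gap between groups. Not Facebook's actual Fairness Flow implementation.
from collections import defaultdict

def group_metrics(records):
    """Per-group accuracy from (group, predicted, actual) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def max_disparity(metrics):
    """Largest accuracy gap between any two groups."""
    values = metrics.values()
    return max(values) - min(values)

# Hypothetical match outcomes for job seekers over and under 40.
records = [
    ("under_40", 1, 1), ("under_40", 1, 1), ("under_40", 0, 1), ("under_40", 1, 1),
    ("over_40", 1, 1), ("over_40", 0, 1), ("over_40", 0, 1), ("over_40", 1, 1),
]
metrics = group_metrics(records)
print(metrics)               # {'under_40': 0.75, 'over_40': 0.5}
print(max_disparity(metrics))  # 0.25
```

A real system would use richer quality metrics than accuracy and test whether observed gaps are statistically significant, but the core comparison, slicing one metric by group, is the same.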

Facebook didn’t say whether it found evidence of bias or made any changes to the tool.

The Fairness Flow team worked with outside organizations including Stanford, the Center for Social Media Responsibility, the Brookings Institution, and the Better Business Bureau Institute for Marketplace Trust to develop the tool, according to the Facebook spokesperson. A spokesperson for Brookings said Facebook was invited to participate in a series of roundtables that it arranged for a forthcoming paper on algorithmic bias, and that Brookings did not work on the Facebook tool specifically. The Institute for Marketplace Trust, which is funded in part by Facebook, said there were numerous calls and emails between the BBB and Facebook after those roundtables. The Facebook spokesperson confirmed that insights from those roundtables informed how the tool was built.

The tool is still in the early stages of development, the Facebook spokesperson said, and the team is talking with other internal teams about how it can be applied elsewhere.