NO JUDGMENT

We can’t address bias in AI without considering power

The most dangerous AI bias is the bias of the more powerful over the less powerful.
By Helen Edwards

Founder of Sonder Scheme

Sometimes it takes something unexpected to shift people’s perspectives. That’s what a group of MIT and Harvard Law School researchers aimed for when they set out to reframe fairness in AI by studying its use on the powerful rather than the powerless. They presented their results in January at the ACM Conference on Fairness, Accountability, and Transparency in Barcelona.

In the US, more than half a million people are locked up despite not yet having been convicted or sentenced, a result of pretrial detention policies. Ninety-nine percent of jail growth since 2002 has been in the pretrial population, much of it driven by an increased reliance on money bail, according to a report by the Prison Policy Initiative. As a result, the report’s authors write, “local jails are filled with people who are legally innocent, marginalized, and overwhelmingly poor.”

Using theories borrowed from social justice work, the MIT and Harvard research team built a model to test how personal agency affects the accuracy of AI in this context.

