NO JUDGMENT

We can’t address bias in AI without considering power

[Image: Handcuffs hang from prison bars.]
The most dangerous AI bias is the bias of the more powerful over the less powerful.
By Helen Edwards

Founder of Sonder Scheme


Sometimes it takes something unexpected to shift people’s perspectives. That’s what a group of MIT and Harvard Law School researchers were aiming for when they set out to reframe fairness in AI by studying its use on the powerful rather than the powerless. They presented the results of their research in January at the ACM Conference on Fairness, Accountability and Transparency in Barcelona.

In the US, over half a million people are locked up despite not yet having been convicted or sentenced, a result of pretrial detention policies. Ninety-nine percent of jail growth since 2002 has been in the pretrial population, much of it driven by an increased reliance on money bail, according to a report by the Prison Policy Initiative. As a result, the report’s authors write, “local jails are filled with people who are legally innocent, marginalized, and overwhelmingly poor.”

Using theories borrowed from social justice work, the MIT and Harvard research team built a model to test how personal agency affects the accuracy of AI in this context.

“Just as arrest data tells us more about the police than it does about defendants, we wondered if there would be more information in patterns of judges’ behavior than in the patterns of defendants in pre-trial evaluation for bail,” Chelsea Barabas, one of the authors of the paper, told Quartz.

AI models are all about prediction. The researchers hypothesized that predictive accuracy is a by-product of agency: people with the power to make their own decisions should be more predictable than those buffeted by countless complex forces beyond their control. In the courtroom, that meant hypothesizing that judges’ behavior would be more predictable than defendants’.

That is exactly what they found. The judges’ decisions turned out to be more predictable than those of the defendants.

A key outcome for pretrial risk assessment is estimating whether someone will fail to appear for a future court date. The researchers flipped this around, developing an alternative model that predicts whether a judge will detain a defendant for more than 48 hours, a measure they dubbed “failure to adhere” and treated as a proxy for imposing unaffordable bail without due process of law. Using the same data but interrogating it from a different perspective, their “alt-FTA” model achieved an accuracy of 80%; mainstream FTA models are accurate around 65% of the time. Whether a judge would impose excessive detention was more predictable than whether a defendant would show up in court.
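The reframing amounts to relabeling the same records with a different target variable: instead of asking what the defendant will do, ask what the judge will do. A minimal sketch in Python, using entirely synthetic data and hypothetical field names (only the 48-hour detention threshold comes from the paper):

```python
import random

random.seed(0)

# Hypothetical pretrial records. The same rows can back two different
# prediction targets; all fields and values here are invented.
def make_record():
    return {
        "prior_arrests": random.randint(0, 5),
        "bail_amount": random.choice([0, 500, 1000, 5000, 10000]),
        "appeared_in_court": random.random() < 0.8,          # defendant behavior
        "hours_detained": random.choice([12, 36, 72, 120]),  # judge behavior
    }

records = [make_record() for _ in range(1000)]

# Mainstream target: "failure to appear" (FTA) -- a label that looks
# down at defendants.
y_fta = [not r["appeared_in_court"] for r in records]

# Flipped target: "failure to adhere" (alt-FTA) -- a label that looks
# up at judges, proxied by detention beyond 48 hours.
y_alt_fta = [r["hours_detained"] > 48 for r in records]

base_rate = sum(y_fta) / len(y_fta)
alt_rate = sum(y_alt_fta) / len(y_alt_fta)
print(f"FTA base rate:     {base_rate:.2f}")
print(f"alt-FTA base rate: {alt_rate:.2f}")
```

Everything downstream of the label, such as features, model choice, and evaluation, can stay the same; only the question being asked of the data changes.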

The researchers stress that their algorithm is not intended for any practical use. Instead, their intention was to demonstrate that for AI to be fair and unbiased, power matters. They wanted to establish a “counter-narrative, a risk assessment that ‘looks up’ at judges,” and subjects “those in power to the very socio-technical processes which are typically reserved for only the poor and the marginalized.”

How might judges feel to see their work reduced to an algorithm—one that didn’t paint them in a favorable light or leave much room for individual circumstance? That, of course, is what sentencing algorithms do to less powerful individuals on a daily basis.

That one aspect of judicial decision-making is more predictable than one aspect of defendants’ decision-making doesn’t really say much on its own. Different phenomena are easier or harder to predict, and that predictability doesn’t always map easily onto power. However, the process of developing the algorithm revealed a dilemma: data that reflect poorly on the powerful may be less likely to see the light of day. And, in fact, the researchers reported varying levels of cooperation from courts throughout their work.


That reluctance introduces another source of bias: a lack of transparency makes it harder to question the decisions of those in power. As the researchers put it: “Of what value is this access if it is contingent upon refusing to question the unchecked assumptions and premises of the data regime itself?”

This last point is important. The most dangerous AI bias is the bias of the more powerful over the less powerful. Fighting bias, more often than not, means fighting power. If those outside the process cannot challenge the data sources and structures of an AI system, then there is no true challenge to power and no honest way to correct for bias. (This is part of why addressing bias in AI requires more than technical fixes.)

A data scientist doesn’t need to be prejudiced to design a prejudiced algorithm. All that is required is that the data scientist conforms to the status quo, including accepting any inherent data bias and any pre-existing imbalance in power structures as an accurate representation of the world. But as the paper’s authors point out, “Data and their subsequent analyses are always the by-product of socially contingent processes of meaning making and knowledge production.”

For AI to be fair, data scientists will need to include the social context in which the algorithm acts.

The researchers’ inspiration came from an unlikely corner: anthropology. In the 1970s, the dominant paradigm in anthropology was to study people at the periphery of Western culture. Those who studied had “the relative upper hand,” while those who were studied were “the underdog.” The challenge, issued by the scholar Laura Nader, was to shift the field to study powerful strata of society rather than only communities at the margins. As a result, scholarship and practice became more methodologically and ethically complex, uncovering hidden assumptions and delivering richer insights. This, in many ways, is the kind of fundamental shift we need in AI today.

Tackling bias in AI is most often framed in narrow technical terms: the data scientist’s role is to be neutral and apolitical, complex social problems yield to processes of data distillation, and “objective truths” can simply be extracted from the data. But if we want AI to help usher in a fairer society, this narrow view of technology, data, and the role of the data scientist will need to change in far-reaching ways.
