To fix algorithmic bias, we first need to fix ourselves

Shifting the blame from person to machine.
Image: Reuters/Michael Dalder

In 2014, two 18-year-old girls found an unlocked kid’s bicycle and a Razor scooter in a residential area of Fort Lauderdale, Florida. Sade Jones and Brisha Borden decided to ride them a couple of blocks, laughing at how big they felt on the tiny frames. Hours later, they were in jail, charged with burglary and petty theft. According to ProPublica, they were not locked up simply because they had “borrowed” their rides: they were locked up because COMPAS, a proprietary AI system designed by the company Northpointe to predict recidivism, rated Borden at high risk and Jones at medium risk of reoffending within the next two years.

It did so, ProPublica claimed, because the girls were black.

Modern risk-assessment tools have been used in the criminal justice system since the early 2000s. They were intended to reduce incarceration rates, and they fared fairly well at that: in Virginia, one of the first states to adopt such algorithmic assessments, prison-population growth slowed from 31% to 5% within a decade. But few organizations have taken a closer look inside the AI’s black box to see how these tools work and whether they discriminate against minorities.

Northpointe claims the proportion of people who reoffend within each risk category is more or less the same regardless of race, but after analyzing the risk scores of more than 7,000 defendants, ProPublica found that black defendants were twice as likely as white defendants to be flagged as medium or high risk.

“There might have been a dream that we could escape bias and unfairness by simply leaving decisions to ‘The Algorithm’ and taking the actual act of deciding out of human hands—but this is precisely what these results show to be impossible,” says Thomas Miconi, a researcher in computational neuroscience at the Neurosciences Institute. “We cannot offload our moral compass to the machines.”

Miconi recently published a paper somewhat gloomily titled “A Note On The Impossibility of Fairness.” In it, he argues that fairness is a subjective human notion, and that no algorithm can satisfy every reasonable definition of it at once: for a machine, perfect fairness is mathematically impossible.
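The heart of the argument can be sketched with simple arithmetic. A standard identity in the fairness literature ties a classifier’s false-positive rate to its precision (PPV), recall (TPR), and a group’s base rate of reoffending. If the classifier is equally calibrated for two groups (same precision and recall for both) but the groups reoffend at different rates, their false-positive rates are forced apart. The numbers below are illustrative, not COMPAS’s actual figures:

```python
def false_positive_rate(base_rate, ppv, tpr):
    """False-positive rate implied by a given precision (PPV),
    recall (TPR), and base rate of reoffending.

    Derivation: FP = TP * (1 - PPV) / PPV and TP = TPR * positives, so
    FPR = FP / negatives = TPR * (base_rate / (1 - base_rate)) * (1 - PPV) / PPV
    """
    return tpr * (base_rate / (1 - base_rate)) * (1 - ppv) / ppv

# Two groups scored by one equally calibrated classifier
# (same precision and recall for both), but different base rates.
ppv, tpr = 0.7, 0.6  # illustrative values, not COMPAS's

fpr_high = false_positive_rate(base_rate=0.5, ppv=ppv, tpr=tpr)
fpr_low = false_positive_rate(base_rate=0.3, ppv=ppv, tpr=tpr)

print(f"FPR, group with 50% base rate: {fpr_high:.2f}")  # ~0.26
print(f"FPR, group with 30% base rate: {fpr_low:.2f}")   # ~0.11
```

The group with the higher base rate is wrongly flagged more than twice as often, even though the classifier treats both groups “equally” by calibration, which is essentially the asymmetry ProPublica reported.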

We can’t make fair, intelligent machines because we are inherently unfair ourselves. In the US, minorities make up a disproportionate share of the prison population, and men, on average, out-earn women. If we can’t get it right ourselves, how are we meant to train machines to do a better job?

For philosopher Immanuel Kant, injustice was a state of nature. In The Metaphysical Elements of Justice, he wrote that even if we were all perfectly moral and righteous, there would still be no justice, because each of us would stick to his or her own subjective interpretation of what is or is not fair: “Even if we imagine [people] to be ever so good natured and righteous before a public lawful state of society is established, individual men, nations and states can never be certain they are secure against violence from one another because each will have the right to do what seems just and good to him, entirely independently of the opinion of others.” Kant was a mathematician at heart, just like Miconi and many others trying to figure out why a machine that should have been fair by design effectively threw two teenage girls into jail for something that deserved at most a reprimand.

Unfairness is a property of our reality, embedded in the very fabric of the universe, and it will stay with us forever. “No matter who, or what, does the deciding, there must always be trade-offs, and someone has to decide which trade-off to choose,” Miconi says. Sounds pretty Kantian, doesn’t it?

Miconi says there are two ways to build a perfectly fair machine. The first is a perfect predictor: an ultimate COMPAS algorithm imbued with godlike foreknowledge, always right in its predictions. Northpointe says COMPAS’s accuracy currently stands at around 76%, but ProPublica claims it is sometimes no better than flipping a coin.

Realistically, though, godlike foreknowledge is something we’ll never achieve. But there is a second way: a machine can be perfectly fair when there is no difference in the prevalence of the predicted condition between groups. COMPAS is more likely to flag black people as risky because they do, in fact, reoffend more often (pdf). Why? That is something fair machines can’t fix, but fair people can.
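Both escape routes can be checked with the same kind of arithmetic: the false-positive rate a group’s base rate implies at a fixed precision (PPV) and recall (TPR). A perfect predictor yields zero false positives for every group regardless of base rate, and because the implied rate rises strictly with prevalence, equal prevalence is the only other way for equally calibrated groups to share an error rate. The numbers are again illustrative, not COMPAS’s:

```python
def false_positive_rate(base_rate, ppv, tpr):
    # FPR implied by precision, recall, and base rate:
    # FPR = TPR * (base_rate / (1 - base_rate)) * (1 - PPV) / PPV
    return tpr * (base_rate / (1 - base_rate)) * (1 - ppv) / ppv

# Escape route 1: a perfect predictor (PPV = TPR = 1).
# Zero false positives for any group, whatever its base rate.
for p in (0.1, 0.3, 0.5):
    assert false_positive_rate(p, ppv=1.0, tpr=1.0) == 0.0

# Escape route 2: equal prevalence. At fixed PPV and TPR, the implied
# FPR rises strictly with the base rate, so two groups can share an
# error rate only if they share a base rate.
fprs = [false_positive_rate(p, ppv=0.7, tpr=0.6) for p in (0.1, 0.2, 0.3, 0.4, 0.5)]
assert all(a < b for a, b in zip(fprs, fprs[1:]))  # strictly increasing
```

Since neither route is open to COMPAS, the trade-off between calibration and equal error rates is unavoidable; the only question is who decides how to strike it.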