How to prevent human bias from infecting AI

Hand in hand.
Image: Reuters/Hannah McKay

Artificial intelligence is already an integral part of our lives. From Google Maps to Alexa, AI makes our lives more convenient. Less visibly, AI has helped streamline operations across a range of industries by automating mundane tasks.

But as AI expands its applications, many have expressed concerns that it will exacerbate existing inequalities. Numbers from the World Economic Forum suggest that more than half of the 1.4 million US workers expected to be affected by tech disruption will be women. And there are questions about ways that algorithms reflect human biases.

But speaking on a panel at Advertising Week Europe 2018 in London (which Quartz moderated), three experts laid out how AI could become a force for good. The panelists argued that the crux of the problem with bias in AI is the human element.

“Machine learning isn’t bias. Machine learning does the thing that you tell it to do,” said Vince Lynch, CEO of IV.AI, a company that makes personalized AI for firms. “So you can take a small bias that’s happening inside this pocket that the humans had thought out to begin with, and it can become amplified through the machine learning model. So at the end it looks like it’s real, and the AI, which is in the middle, can get blamed.”

Russ Shaw, an angel investor and the co-founder of Tech London Advocates & Global Tech Advocates, agreed. “It is a real issue,” he said. “We live in a biased society.” But AI can help address the problem, he argued: “There are steps we can take now to address some of the biggest concerns about an AI-enabled future. Let’s increase the diversity of AI coders to remove unconscious bias from algorithms; let’s introduce regulation to ensure the technology is fair and safe; and let’s upskill the population to ensure people can make the most of the jobs AI will create, rather than replace.”

Risk and reward.
Image: Quartz

That’s easier said than done. Human bias in hiring has been well-documented, with studies showing that even with identical CVs, men are more likely to be called in for an interview, and non-white applicants who “whiten” their resumes also get more calls.

But AI is not immune to hiring bias either. We know that across industries, women and ethnic minorities are regularly burned by algorithms, from finding a job to getting healthcare. And with the greater adoption of AI and automation, this is only going to get worse.

“We have to acknowledge that if you are of BAME [black, Asian, or minority ethnic] origin, you’re twice as likely to be unemployed as a white person in the UK today, with the same skill set,” said Tabitha Goldstaub, co-founder of AI community group CognitionX. “The challenge is then, with fewer and fewer jobs, how the hell do we make sure this doesn’t create more of a divide in society.”

Panelists argued that the more we apply AI, the more effective it will be at eliminating human bias. But it won’t happen overnight.

“Like most forecasts involving technology, people tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run,” added Steven Wolfe Pereira, chief marketing and communications officer at AI tech group Quantcast. “The same thing can apply to inequality in the workforce. While we can get excited about AI’s potential today, the reality is the use cases are very limited and basic—like using machine learning to analyze résumés to remove unconscious bias.”

It’s a bit of a chicken-and-egg situation: we need a diverse workforce to make sure the data feeding AI isn’t skewed, but AI isn’t yet effective at bringing more women and ethnic minorities into tech.