CHANGING THE FIELD

Congress is worried about AI bias and diversity

While studying artificial intelligence during the 1990s for his Ph.D. at MIT, Charles Isbell broke the software some of his friends were working on.

“I was breaking all of their facial recognition software because apparently all the pictures they were taking were of people with significantly less melanin than I have,” Isbell, now executive associate dean at the Georgia Institute of Technology, told a hearing of the Congressional Subcommittee on Information Technology today. “And so they had to come up with ways around the problem—of me.” While the facial recognition algorithm worked for his lighter-skinned peers, it couldn’t recognize his darker complexion. The problem is not unique: in 2015, a Google algorithm labeled photos of black people as gorillas.

The kind of experience Isbell had as a black man in computer science is one that is increasingly worrying legislators as AI infiltrates the world’s technology and the tech industry continues to be dominated by white men. “What are the biases you have seen, since the lack of diversity [in the field]?” Congresswoman Robin Kelly asked the four men on the panel.

Isbell said his classmates eventually fixed the facial recognition algorithm, developing methods that didn’t rely on the assumptions baked into the original data. Recent research from the MIT Media Lab shows, however, that facial recognition still performs significantly worse for people of color.

“This is not a small thing,” Isbell said of his experience. “It can be quite subtle, and you can go years and years and decades without even understanding you are injecting these kinds of biases, just in the questions that you’re asking, the data you’re given, and the problems you’re trying to solve.”

In his opening statement, Isbell talked about biased data in artificial intelligence systems today, including predictive policing and the algorithms used to predict recidivism rates.

“It does not take much imagination to see how being from a heavily policed area raises the chances of being arrested again, being convicted again, and in aggregate leads to even more policing of the same areas, creating a feedback loop,” he said. “One can imagine similar issues with determining it for a job, or credit-worthiness, or even face recognition and automated driving.”
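The feedback loop Isbell describes can be sketched in a few lines of code. The following is a toy simulation (my illustration, not something presented at the hearing, and every number in it is an assumption): two areas have the same underlying crime rate, but area 0 starts out more heavily policed. Each year, patrols are allocated according to past arrest counts, and concentrated patrol presence yields disproportionately more recorded arrests. The recorded disparity then grows on its own, even though the true rates never differ.

```python
# Toy feedback-loop model: identical true crime rates, skewed history.
# All parameters are illustrative assumptions.
arrests = [60.0, 40.0]   # historical skew: area 0 starts over-policed
CONCENTRATION = 1.2      # >1 models patrol presence compounding arrests

for year in range(10):
    total = sum(arrests)
    # Patrols allocated in proportion to past recorded arrests.
    patrol_share = [a / total for a in arrests]
    # New arrests scale superlinearly with patrol share, so the
    # over-policed area's share of the record keeps growing.
    new_arrests = [50 * s ** CONCENTRATION for s in patrol_share]
    arrests = [a + n for a, n in zip(arrests, new_arrests)]

share_0 = arrests[0] / sum(arrests)
print(f"area 0 share of recorded arrests after 10 years: {share_0:.2f}")
# area 0 started with a 0.60 share; that share rises every year even
# though both areas have the same underlying crime rate
```

The point of the sketch is that no component is individually "biased": the allocation rule just follows the data, and the data just reflects where patrols were sent. An algorithm trained on those arrest records would inherit the skew.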

Later in the hearing, Congressman Will Hurd asked what kind of funding would be needed for basic research, specifically mentioning funding for studying AI bias.

“Do we need basic research into bias, do we need basic research into some aspect of neural networks? What kind of research should we be funding to raise our game?” Hurd said.

Ian Buck, who runs Nvidia’s datacenter business, said that bias research is only getting more important.

“Certainly as data science becomes more important, understanding the root cause of bias and how its gets introduced and understood is a very important basic research understanding. A lot of this work has been done, it can be dusted off and continued,” Buck said.
