[Image: A man looks at a demonstration of facial recognition software. REUTERS/Thomas Peter. Caption: Let’s rethink this one.]

AI gatekeepers are taking baby steps toward raising ethical standards

By Nicolás Rivero



For years, Brent Hecht, an associate professor at Northwestern University who studies AI ethics, felt like a voice crying in the wilderness. When he entered the field in 2008, “I recall just agonizing about how to get people to understand and be interested and get a sense of how powerful some of the risks [of AI research] could be,” he says.

To be sure, Hecht wasn’t—and isn’t—the only academic studying the societal impacts of AI. But the group is small. “In terms of responsible AI, it is a sideshow for most institutions,” Hecht says. But in the past few years, that has begun to change. The urgency of AI’s ethical reckoning has only increased since Minneapolis police killed George Floyd, shining a light on AI’s role in discriminatory police surveillance.

This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.

The Annual Conference on Neural Information Processing Systems will require a “broader impact statement” addressing the effect a piece of research might have on society. The Conference on Empirical Methods in Natural Language Processing will begin rejecting papers on ethical grounds. Others have emphasized their voluntary guidelines.

The new standards follow the publication of several ethically dubious papers. Microsoft collaborated with researchers at Beihang University to algorithmically generate fake comments on news stories. Harrisburg University researchers developed a tool to predict the likelihood someone will commit a crime based on their face. Researchers clashed on Twitter over the wisdom of publishing these and other papers.

“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute. Scientists have an obligation to think about applications and consider restricting research, she says, especially in fields like facial recognition with a high potential for misuse.

The impacts of academic ethics requirements are likely to extend beyond the ivory tower.

“There is just about no buffer between what happens at conferences and what happens in real life,” says Emily M. Bender, a computational linguistics professor at the University of Washington. Much of the work published at conferences comes from industry labs, not academia, she points out. Plus, a movement toward open-source code means that tools “could be picked up by an enterprising startup and funded by venture capitalists who are not very well versed, perhaps, in thinking through [ethical] issues.”

Bender, Raji, Hecht, and others have called for safeguards to head off perilous AI applications. Hecht, who helped lay the groundwork for impact statement requirements in 2018, argued that proposals for research funding should include a similar section on potential harms. Bender has called for statements that explain the biases embedded in the data used to train AI models. Raji and others advocated for transparency about models’ performance across demographic lines like race, gender, or age.

Ultimately, Hecht says, the goal is to establish incentives to perform ethical research. Because publications have a direct impact on researchers’ careers, changing paper standards could bend the direction of AI research, he argues.

“If you’re having to present work that over and over again has negative impacts that are made transparent through that statement,” Hecht says, “I think that’s going to affect people’s decisions.”
