This year the world woke up to the society-shifting power of artificial intelligence

French president Francois Hollande lays out France’s AI strategy.
Image: Reuters/Stephane de Sakutin

In less than five years, a 2012 academic breakthrough in artificial intelligence evolved into the technology responsible for making healthcare decisions, deciding whether prisoners should go free, and determining what we see on the internet.

Machine learning is beginning to invisibly touch nearly every aspect of our lives; its ability to automate decision-making challenges the future roles of experts and unskilled laborers alike. Hospitals might need fewer doctors thanks to automated treatment planning, and truck drivers may no longer be needed by 2030.

But it’s not just about jobs. Serious questions are being raised about whether the decisions made by AI can be trusted. Research suggests that these algorithms are easily biased by the data from which they learn, meaning societal biases are reinforced and magnified in the code. That could mean certain job applicants are excluded from consideration when AI hiring software is used to scan resumes. What’s more, the decision-making process of these algorithms is so complex that AI researchers can’t definitively say why one decision was made over another. And while that may be disconcerting to laypeople, there’s an industry debate over how valuable knowing those internal mechanisms really is, meaning research may very well forge ahead with the understanding that we simply don’t need to understand AI.
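
To see how that kind of bias creeps in, consider a toy sketch, ours rather than anything drawn from the systems mentioned in this article, with every name and number in it hypothetical. A simple resume-screening model is trained on synthetic, historically biased hiring data; it never sees the protected attribute directly, but a correlated proxy feature lets the old bias resurface in its scores.

```python
# Illustrative sketch only: a toy resume screener trained on historically
# biased hiring data. The model never sees the "group" label directly, but a
# correlated proxy feature lets the historical bias resurface in its scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)           # protected attribute (never given to the model)
skill = rng.normal(0, 1, n)             # genuinely job-relevant signal
proxy = group + rng.normal(0, 0.3, n)   # e.g. zip code or alma mater, correlated with group

# Historical hiring decisions: equally skilled applicants from group 0
# were hired less often -- that is the bias baked into the training data.
hired = (skill + 1.0 * group + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Score two otherwise identical resumes that differ only in the proxy feature.
scores = model.predict_proba([[0.5, 0.0], [0.5, 1.0]])[:, 1]
print(f"hire probability with group-0-like proxy: {scores[0]:.2f}")
print(f"hire probability with group-1-like proxy: {scores[1]:.2f}")
```

The two applicants end up with noticeably different scores, and nothing in the model’s output explains why; that combination of inherited bias and opacity is exactly what the researchers above are warning about.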

Until this year, these questions typically came from academics and researchers skeptical of the breakneck pace at which Silicon Valley was implementing AI. But 2017 brought new organizations, spanning big tech companies, academia, and governments, dedicated to understanding the societal impacts of artificial intelligence.

“The reason is simple—AI has moved from research to reality, from the realm of science fiction to the reality of everyday use,” Oren Etzioni, executive director of the Allen Institute for AI, tells Quartz. The Allen Institute, founded in 2012, predates much of the contemporary conversation on AI and society, having published research on the ethical and legal considerations of AI design.

Here’s a quick chronological list of 2017’s entrants to the conversation:

  • Ethics and Governance of Artificial Intelligence Fund, founded January 2017 with investment from Reid Hoffman, the Omidyar Network, and the Knight Foundation to research ethical AI design, communication, and public policy.
  • Partnership on AI, founded September 2016 but announced its first initiatives in May 2017. An industry-led organization to form best practices for ethical and safe AI creation, with founding members Amazon, Apple, Facebook, Google, IBM, and Microsoft. More than 50 other companies have joined since its founding.
  • People and AI Research, founded July 2017 by Google to study how machines and humans interact.
  • DeepMind Ethics & Society, founded October 2017 to study AI ethics, safety, and accountability.
  • AI Now Institute, founded November 2017 by Microsoft’s Kate Crawford and Google’s Meredith Whittaker to generate core research on the societal impacts of AI.
  • Proposed US Department of Commerce committee on AI, a bill drafted in December 2017 by Senator Maria Cantwell that would establish a committee to make wide-reaching recommendations on how to regulate artificial intelligence.

These organizations aren’t made up of just computer scientists and tech executives. The Partnership on AI’s board touts the executive director of the ACLU of Massachusetts and a former Obama economic adviser. A California Supreme Court justice sits on the AI Now Institute’s advisory board, alongside the president of the NAACP Legal Defense Fund.

The trend of organizing comes at a time when marketing around AI has reached a fever pitch. While many researchers bemoan claims about AI’s use and efficacy as “hype,” the companies paying new AI hires high-six-figure salaries are often the ones pushing the narrative. Google and Microsoft are rebranding themselves as “AI-first” in an attempt to show that their core services are all powered and personalized by machine learning. Technologists like Andrew Ng have preached that AI is the new electricity, and Elon Musk tweets regularly about the dangers of AI, despite the technology being a cornerstone of his business.

Some worry that these organizations, like the Partnership on AI, will be used as a way for large technology companies to skirt government intervention by enacting softer, self-imposed regulation amongst themselves. In an article contributed to Quartz, lawyer Jacob Turner writes that having Google and Facebook regulate AI would be like tobacco companies regulating cigarettes. While the Partnership on AI doesn’t consider itself a regulatory body, other organizations like the IEEE have released their own versions of best practices for ethically aligned AI, which stress design factors like asking users for consent before their data is analyzed for future learning.
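
As a purely hypothetical illustration of that consent-first design factor, a training pipeline might gate user data behind an explicit opt-in. The sketch below is our own, and its names are assumptions, not part of any published IEEE standard.

```python
# Minimal sketch of a consent gate in front of a training pipeline.
# All names are hypothetical; a real system would also need consent
# revocation, audit logging, and purpose-specific scopes.
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    features: dict
    consented_to_training: bool = False  # explicit opt-in, off by default

def collect_training_data(records):
    """Keep only the records whose owners opted in to model training."""
    return [r.features for r in records if r.consented_to_training]

records = [
    UserRecord("u1", {"clicks": 12}, consented_to_training=True),
    UserRecord("u2", {"clicks": 7}),  # never opted in, so excluded
]
print(collect_training_data(records))  # -> [{'clicks': 12}]
```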

Those are the two extremes of AI self-regulation right now: suggestions so cautious that they only sniff at the edges of regulation, or dense technical prose weighing hundreds of pages. The only ombudsmen of AI today are the media and outspoken academics or corporate researchers, who typically react to specific academic or industrial faux pas.

Despite those conflicts, self-regulation is often a precursor to codified law and oversight, says Gary Marchant, professor at Arizona State University and co-chair of the upcoming Conference on Artificial Intelligence, Ethics, and Society.

“The practical reality is that in many emergent technologies like artificial intelligence, [self-regulatory bodies] are going to be essential, but they are problematic and challenging in many ways,” Marchant says. “Right now, we really don’t have a lot of good options in terms of traditional regulation.”

Marchant points to self-regulation in another emergent field: nanotechnology. In the early aughts, DuPont created the Nano Risk Framework alongside the Environmental Defense Fund to ensure nanotechnology products were safe for humans and the environment throughout their entire lifecycle.

Despite big tech’s attempts at self-regulation, government intervention seems to be on the horizon. Cantwell’s proposed committee would suggest regulations for AI technologies within a year and a half. More narrowly, a Senate committee has sent self-driving vehicle regulation to a vote, which would set limits on the number of autonomous vehicles that can be sold and mandate safety standards matching those of current cars.

Despite this movement, the US might find itself behind other parts of the world in terms of an active governmental role in AI strategy or regulation. France introduced a comprehensive AI strategy in January 2017, with a commission to create rules on ethics and the privacy of user data. Canada, which has long fostered AI expertise, invested $125 million in AI development in March 2017, though experts argue that regulation is being lost in the frenzy to fund the technology. And China has laid out a plan to surpass the US and build a $150 billion industry by 2030, one that focuses less on regulation than on the areas where the government would like to see progress.

If 2017 was the year many of these organizations were created, 2018 will bring their first wave of research, reports, and best practices. Most are now hiring staff, meaning “AI safety researcher” and “AI policy researcher” might be the hot jobs of 2018. Effective Altruism, a community whose members aim to spend their careers in the highest-impact roles possible, ranks work on AI safety as one of the most important career paths today. The website 80,000 Hours, which helps people choose careers based on the issues they’re passionate about, also ranks AI safety and policy highly.

And, naturally, 2018 will bring more AI research. Autonomous cars will get more precise and facial recognition will get spookier, but the questions the technology raises will remain the same.