We wouldn’t trust a doctor employed by a tobacco company. We wouldn’t let the automobile industry set vehicle-emissions limits. We wouldn’t want an arms maker to write the rules of warfare. But right now, we are letting tech companies shape the ethical development of AI.
In an attempt to help shape the future of AI, DeepMind, the world-leading AI company acquired by Google in 2014, launched a new ethics board in October 2017 “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.” Similarly, in 2016 the Partnership on AI to Benefit People and Society was formed by major tech companies—Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft—“to study and formulate best practices on AI technologies.”
This might sound promising—but actually we ought to be worried. The private sector, which stands to benefit most financially from any decisions made, is taking the lead on AI ethics. But what we really need is outsiders looking in.
Why do we need regulation in the first place? AI raises new issues that current laws don’t cover. Right now it isn’t at all clear who should be held responsible if AI causes harm (such as in a self-driving car accident): the original designer, the owner, the operator, or perhaps even the AI itself. If we apply solutions on a case-by-case basis, we risk uncertainty and confusion. Failing to act also increases the likelihood of harmful, knee-jerk reactions fueled by public anger. We can avoid extreme outcomes if we sit down now to make the rules.
These issues aren’t just speculative thought experiments. AI is already being handed difficult decisions that have until now rested on human intuition, or on principles and doctrines that have been legally codified. These range from life-and-death questions, such as whether autonomous “killer robots” should be banned, to issues of economic and social importance, such as how to avoid algorithmic bias when AI decides whether an applicant gets a job or a prisoner is granted parole. If a human were to make these decisions, they would be held to a legal or moral standard. No such rules exist in the wild west of AI.
AI regulation is currently dominated by corporate interests. We have seen this situation before. In 1954, the tobacco industry published the notorious “Frank Statement to Cigarette Smokers” in hundreds of US newspapers. “We are establishing a joint industry group consisting initially of the undersigned. This group will be known as [the] Tobacco Industry Research Committee. In charge of the research activities of the Committee will be a scientist of unimpeachable integrity and national repute. In addition there will be an Advisory Board of scientists disinterested in the cigarette industry. A group of distinguished men from medicine, science, and education will be invited to serve on this Board.”
Sounds familiar, right? Researchers have since linked the success of the tobacco industry’s campaign for self-regulation to millions of additional deaths from smoking and its health effects.
More recently, we only need to look at the 2008 global financial crisis to see what happens when industry self-regulation gets out of control. Though governments have since stepped in to require that banks hold more, and higher-quality, capital to back their lending, the world economy is still suffering the repercussions of the previously lax regime.
That’s not to say that progress isn’t being made. DeepMind has signed up prominent public commentators like AI philosopher Nick Bostrom and economist Jeffrey Sachs as members of its ethics board, and the Partnership’s coalition now includes not-for-profits like the American Civil Liberties Union, Human Rights Watch, and UNICEF. Even this may not go far enough, however. Though DeepMind notes it is prepared to hear “uncomfortable” criticism from its advisors, rules formulated by corporate ethics boards will always lack the legitimacy that a government can provide.
Governments are supposed to act for the common good of everyone in society. Corporations, on the other hand, are often legally required to maximize value for their owners. For example, the automobile-industry pioneer Henry Ford declared in 1916 that “My ambition is to employ still more men, to spread the benefits of this industrial system to the greatest possible number, to help them build up their lives and their homes.” In a controversial decision, the Michigan Supreme Court upheld a complaint against him, saying, “A business corporation is organized and carried on primarily for the profit of the stockholders.” The Ford case acts as a reminder that in many countries’ company law, directors are required to privilege making money over morality.
But governments are still trying to catch up to Silicon Valley when it comes to AI regulation; the longer they wait, the more difficult it will be to seize back the narrative from the tech companies. In the UK there is now a House of Lords select committee on AI, and the European Parliament has proposed the creation of new civil-law rules on robotics. But so far, these initiatives are nowhere near developing common standards across the public and private sectors.
It’s a difficult task, but not impossible. On a national level, governments already oversee lots of other complicated technologies, including nuclear power and banking. On an international level, the European Medicines Agency sets pharmaceutical standards for 28 countries, and the Internet Corporation for Assigned Names and Numbers (ICANN) regulates key parts of the entire internet.
It is important that a regulatory body’s ideas are codified into law. If ethical standards are only voluntary, some tech companies will simply decide not to be bound by rules that don’t benefit them, giving some organizations advantages over others. For example, neither of the major Chinese AI companies, Tencent and Baidu, has announced that it will form an ethics board or join the Partnership.
Without one unifying framework, a proliferation of private ethics boards could also lead to too many competing sets of rules. It would be chaotic and dangerous if every major company had its own code for AI, just as it would be if every private citizen could write his or her own legal statutes. Only governments have the power and the mandate to secure a fair system that commands this kind of adherence across the board.
When writing rules for robots, corporate voices should therefore remain contributors and not law-makers. Tech companies may be well-placed to design rules because of their expertise in the area, but industry players are rarely in the best position to properly assess moral hazards.
History shows what can happen if governments sit back and let private companies set their own regulatory standards. Allowing this to occur for AI is not just lazy—it’s dangerous.