The UK would have a shot at leading the world in artificial intelligence and robotics governance, were it not for Brexit. Britain’s impending exit from the EU has cast doubt on crucial legal provisions for AI and robots, according to the results of an inquiry by the British parliament published today (Oct. 12).
To date, the companies that stand to benefit from AI developments have been the ones leading the creation of ethical guidelines around AI and robotics. Governments have lagged behind, although the White House also released its own long-awaited report on AI’s impact today.
The Science and Technology Committee of the House of Commons, the British parliament’s lower chamber, published its report after six months of gathering evidence from academics, companies like Google DeepMind and Microsoft, and experts on AI and robotics in general. It came up with laudably common-sensical recommendations:
- The government should refrain from too much regulation because the industry is growing and changing rapidly
- An independent commission should be funded by the government to develop governance principles and ethical guidelines to ensure AI is “socially beneficial”
- The government is tardy in saying how it’s going to deal with a looming “digital skills crisis” that the committee previously highlighted, and jobs could be at stake if AI develops without a plan
The official guidance is welcome, as the UK has established itself as a hotbed for AI companies in Europe. These include DeepMind, which Google bought for $500 million in 2014, as well as SwiftKey and Magic Pony, all acquired by larger Silicon Valley firms seeking an AI edge.
An important aspect of devising ethical rules for AI development is the notion of algorithmic accountability, or “decision-making transparency,” as the report calls it. For instance, the report points out, when AlphaGo played a strange move, observers assumed it had malfunctioned, when in fact it had devised a novel strategy for the board game. The problem was that while AlphaGo could play the move, it couldn’t explain why it did so—and no one else could, either. “When the stakes are low—such as in a board game like Go—this lack of transparency does not matter,” the authors write.
But one academic, Tony Prescott of the University of Sheffield, who gave evidence to the committee, noted that algorithms will likely replace human decision-making in more consequential areas, including finance and medicine. The solution, then, is some form of transparency in how algorithms are constructed. A Microsoft researcher called for transparent “building-blocks” for algorithms that humans can comprehend, while Alan Winfield of the Bristol Robotics Laboratory said “inspection” of algorithms in the wake of a disastrously wrong decision would be important.
It turns out that such provisions are already enshrined in law. The EU’s sweeping data-privacy rules, the General Data Protection Regulation (GDPR), which come into force in 2018, already have a clause that’s been called the “right to explanation.” Article 22 of the rules deals with “automated individual decision-making,” and it says that a person has a right not to be subject to a decision “based solely on automated processing.” Academics at the Oxford Internet Institute have argued (pdf) that this amounts to a “right to explanation,” although as with all laws, the clause could ultimately be gamed by tech companies to side-step providing meaningful transparency, as Fusion has pointed out.
It looks like the UK has all the ingredients in place for a successful AI governance framework—except that the GDPR may not apply in post-Brexit Britain. As with all the other EU rules that currently apply in the UK, it’s an open question how closely British law will hew to things like the GDPR once the UK leaves the bloc. As the report points out: “Whether, and how, [the “right to explanation”] will be transposed into UK law is unclear following the EU referendum.”