World leaders’ attempts to forge global AI regulations have been half-hearted and halting.
In 2018, Canada and France spearheaded an effort to form a regulatory body for AI, backed by the G7—but the US spiked the effort, arguing that it would crimp American innovation. Instead, the OECD and the G20 adopted a set of AI principles the following year, while the EU and the World Economic Forum each came up with their own.
“They don’t really have teeth, and they’re also very fragmented,” said Marietje Schaake, the international policy director for Stanford’s Cyber Policy Center and former member of the European Parliament representing the Netherlands. To head off AI-enabled human rights abuses (and the chaos of conflicting regulations), global leaders could turn to another set of rules governing a powerful new technology: the 1967 Outer Space Treaty.
In a blog post, DeepMind researcher and Cambridge fellow Verity Harding argued that the Cold War space agreement offers a roadmap for international cooperation “achieved at a time at least as unsafe and complicated as today’s world—if not considerably more so.”
In the 1960s, as the US and the Soviet Union raced to develop their spacefaring capabilities, concern grew that world powers might use space as a staging ground for weapons of mass destruction. The space accords, signed in 1967, sought to keep nukes out of orbit and to establish that celestial bodies couldn't be claimed by any nation or used for military purposes.
The text of the treaty is instructive. Where AI ethics statements are mushy—the OECD blandly declares that “AI should benefit people and the planet”—the space accords are firm. The 1967 treaty states in no uncertain terms, for example, that “the establishment of military bases, installations and fortifications, the testing of any type of weapons and the conduct of military maneuvers on celestial bodies shall be forbidden.”
The document is also rather short, limited to the areas where global leaders could find broad agreement. Harding argues this was a shrewd approach to getting a deal done in time to have a real impact: “Not letting the best be the enemy of the good meant that by the time man landed on the moon we had a global political framework as a foundation on which to build.” (Harding did not respond to requests for an interview.)
But it also meant global leaders had to scramble to fill in the gaps later. Zia Khan, who heads the Rockefeller Foundation's work on technology and innovation, said these sorts of limitations are inevitable in any treaty. "If we just try to come up with rules, they would probably not be correct, or get out of date, or be mostly right but we'd have no way to tweak them," he said.
Khan argues that, in addition to a first pass at international law, global leaders must create a rule-making body that can adjust regulations as the world changes.
And there’s one key difference between rocket ships in the 1960s and AI today, Khan points out: Algorithms are already ubiquitous, and businesses are increasingly using them to automate operations. “We need people to see this as important,” he said. “If we don’t get our arms around AI now, we’ll end up where we are with the climate because we didn’t think hard enough about how we use oil and energy.”