The global competition to develop fully autonomous weapons systems guided by artificial intelligence risks developing into a full-blown arms race, according to a new report from a Dutch peace group.
Lethal autonomous weapons, or “killer robots,” as they are described by Pax, the anti-war NGO behind the report, are designed to select and engage targets without proximate human control. Their advent has been called the “third revolution in warfare” by AI experts—a successor to the invention of gunpowder and the creation of nuclear bombs.
Seven countries are known to be developing lethal autonomous weapons: the US, China, Russia, the UK, France, Israel, and South Korea. Current US military policy mandates some level of human judgment over the actual decision to fire. None of the seven supports a ban on fully autonomous lethal weapons; China, however, says it backs a ban on their use, but not on their development.
“Lethal autonomous weapons raise many legal, ethical and security concerns,” the Pax report says. “It would be deeply unethical to delegate the decision over life and death to a machine or algorithms.” Machines acting on their own are “unlikely to comply” with the laws of war or to be able to distinguish between civilians and combatants. Pax also foresees an “accountability vacuum” when improper or illegal acts occur.
No one yet knows just how fully autonomous lethal weapons used by opposing militaries, with algorithms making life-or-death decisions, will interact in real-world situations. Pax is calling on the international community to “define clear boundaries in new international law to prevent the development of killer robots.”
The report urges national governments to take the lead on a preemptive ban backed by international law. The tech industry, along with individual engineers and scientists, also has a responsibility not to participate in developing such weapons, Daan Kayser, the Pax report’s lead author, tells Quartz.
What the US is developing
The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has announced it will invest $2 billion in developing the “next wave” of AI. As Quartz reported in February, the US Army is developing an AI-powered tank that “will use artificial intelligence and machine learning to give ground-combat vehicles autonomous target capabilities.” That capability is meant to let weapons “acquire, identify, and engage targets at least 3x faster than the current manual process.”
The Advanced Targeting and Lethality Automated System (ATLAS) is not meant to replace soldiers with machines but to augment their abilities; it is primarily designed to give tank gunners more time to respond in combat, Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security, a bipartisan think tank in Washington, DC, told Quartz.
However, Stuart Russell, a professor of computer science at UC Berkeley and a highly regarded AI expert, said he was deeply concerned about land-based fighting vehicles eventually gaining the capability to fire on their own. The existing requirement for human control over lethal decisions will be dropped “as soon as it’s politically convenient to do so,” Russell contends. Concerned about the criticism, the Army subsequently changed the way it describes the ATLAS program.
The global array of products
Israel Aerospace Industries (IAI) recently introduced the Mini Harpy “loitering munition,” a drone that hovers over a combat zone, autonomously detects targets and then “locks in on the threat and attacks it for a quick, lethal closure.”
The Russian Ministry of Defense in 2018 opened a military-tech incubator of sorts called the Era technopolis, focused solely on the “creation of military artificial intelligence systems and supporting technologies.”
And China is already two years into its “Three-Year Action Plan for Promoting Development of a New Generation Artificial Intelligence Industry.” President Xi Jinping has said he believes AI will be a crucial part of the country’s military prowess moving forward.
Private-sector support is “vital” to AI projects, the Pax report states. Kayser notes that Google adopted broad ethical guidelines for its AI work after employees objected to the company’s role in the Pentagon’s Project Maven, a program that uses AI to analyze drone surveillance footage and help identify targets for strikes. Thales, the French defense and aerospace contractor, has pledged not to develop autonomous weapons.
Private companies will go as far as the law allows, which includes “coming up with concepts and products that they think they might be able to sell,” says Cindy Otis, a former CIA military analyst.
“If they think there’s interest from governments, if there’s no regulation on them, they’ll do it,” Otis tells Quartz. “And defense contractors of course understand that war is increasingly automated. Weapons are increasingly meant to be ‘smart,’ and so they’re going to keep moving in that direction. Government has a responsibility in making sure there are regulations in place and countries putting into law what they’re willing to accept and what they’re not.”
To date, 28 countries have called for a full ban on lethal autonomous weapons. Kayser says that although UN talks have been underway since 2014, the urgency of the threat demands a much faster process.
“This diplomatic process is slow and that’s one of the things we find [most] concerning,” he says. “We do see a growing group of states and people who see the need to set up clear international norms, but our concern is that this is not moving fast enough.”