Weapons built by defense manufacturers that can think for themselves are getting smarter, which means the much-feared killer robot could become a reality sooner rather than later. That’s the warning contained in a new report from Pax, a nonprofit based in the Netherlands that campaigns for peace around the world.
Killer robots, or lethal autonomous weapons systems, are designed to make life-or-death decisions on their own, without human control. It’s a worrying leap that’s been called the “third revolution in warfare,” after gunpowder and the atomic bomb. Both activists and military leaders have called for international regulations to govern these weapons, or even ban them outright, but key governments—like the United States and Russia—have so far resisted.
As far as anyone knows, militaries have yet to actually deploy killer robots on the battlefield, at least offensively. But Pax has identified at least 30 global arms manufacturers that have no policies against developing these kinds of weapons systems, and that are reportedly developing them at a rate outpacing regulation.
The companies include US defense firms Lockheed Martin, Boeing, and Raytheon, the Chinese state-owned conglomerates AVIC and CASC, Israeli firms IAI, Elbit, and Rafael, Rostec of Russia, and Turkey’s STM.
“As long as states haven’t agreed to collectively come up with some kind of regulatory regime, or ideally, a preemptive ban, the fear is very real that companies will be crossing this plane and will develop and produce and eventually field weapons that lack sufficient human control,” the report’s author, Frank Slijper, told Quartz.
Activists don’t believe that military use of some degree of artificial intelligence is problematic in itself. The US military already employs full autonomy in some of its defensive weapons platforms, like the US Navy’s Aegis shipboard missile defense system, which is designed to intercept enemy fire on its own. The US Army is developing an AI-capable cannon, which would select and engage targets on its own, as well as AI-assisted tanks that, as Quartz first reported, will be able to “acquire, identify, and engage targets” at least three times faster than any human. But these systems still all require a person to pull the trigger, so to speak.
Pax is more concerned about the potential deployment of AI in offensive systems that would select and attack targets on their own without human oversight. The group questions how these weapons would distinguish between combatants and civilians, or judge proportional responses. Legal experts still don’t know who would be held responsible if an autonomous weapon broke international law. And without lives on the line, these weapons could make it easier to go to war, and for those wars to escalate more quickly.
The report warns that such weapons would “violate fundamental legal and ethical principles and would destabilize international peace and security.”
What they’re building
Defense firms don’t produce weapons in a vacuum, Slijper said. Instead, he said, these weapons are developed because companies believe that’s what militaries want in their arsenals.
And unlike Google or Amazon, which have both faced public and internal backlash for their work on military systems, companies like Lockheed Martin and Raytheon do almost all of their business with militaries, so they face little risk from the negative reaction of consumers.
For its report, Pax sent questionnaires to 50 arms manufacturers that produce military systems, asking each if it had policies regarding autonomous weapons. Just eight firms said they had principles in place guiding their AI work. The rest did not reply.
Here’s what they told Pax:
| Company | Country | Response |
| --- | --- | --- |
| BAE Systems | UK | Policy supports “our customers’ view that there needs to be human input over the use of force” and “we believe that the use of autonomous systems does not mean a loss of command or the abdication of responsibility for decisions.” |
| Leonardo | Italy | “The use of autonomous systems in safety-critical contexts must be subject to supervision and human control. […] Committed to respect of core principles of [International Humanitarian Law].” |
| Milrem | Estonia | “Human control should always be maintained over all defence systems, including weapon systems…We always choose partners who share and adhere to the same values and positions we do.” |
| Northrop Grumman | US | “Not developing weapon systems that can autonomously select and attack targets without meaningful human control…company policies, practices and procedures reflect a strong commitment to human rights as set forth in the Universal Declaration of Human Rights.” |
| QinetiQ | UK | “Policy prohibits the development of any system capable of firing a weapon without human intervention.” |
| ST Engineering | Singapore | “Complies fully with all Singapore laws and regulations on manufacturing of military products. Beyond Singapore, we also observe all UN sanctions and abide to all treaty obligations to which Singapore is a signatory.” |
| Thales | France | Working on “TrUE AI, an AI that is Transparent, Understandable and Ethical, where humans always remain in control.” |
| Volvo | Sweden | “Activities have no link with research on lethal autonomous weapons.” Policy “has always been that a weapon should be at all times under meaningful human control, and that under no circumstance a weapon could autonomously open fire.” |
Of the weapons that exist now, Slijper said he is particularly worried about “loitering munitions.” Pax describes these as hybrids between drones and guided missiles, which can “loiter” in the air for two hours or more before attacking their targets. Because they are small, cheap, and relatively easy to produce, the number of companies developing these weapons has grown considerably in the last 10 years, Slijper said. With so much availability, it’s only a matter of time before they are deployed on a large scale by state and non-state actors alike.
The Pax report singled out two companies that are now manufacturing such weapons:
- STM, a Turkish state-owned defense company, produces an AI-equipped loitering munition called KARGU. Complete with facial recognition capabilities, KARGU can autonomously select and attack targets using coordinates pre-selected by an operator. Turkey is reportedly set to use these “kamikaze drones” in Syria.
- The Harpy, a “fire and forget” loitering munition manufactured by state-owned Israel Aerospace Industries, has a range of 62 miles and can stay aloft for two hours. IAI states that the system “loiters in the air waiting for the target to appear and then attacks and destroys the hostile threat within seconds.”
While development of autonomous weapons continues apace, Pax believes there is still time to head off eventual catastrophe. The group said companies can play a crucial role in this, and should first make a public pledge against the manufacture of fully autonomous lethal weapons. As far as AI-assisted weapons systems go, Pax believes defense firms must “establish a clear corporate policy with implementation measures” that include:
- Ensuring each new project is assessed by an ethics committee;
- Ensuring the principle of meaningful human control is an integral part of the design and development of weapon systems;
- Adding a clause in contracts, especially in collaborations with ministries of defense and arms producers, stating that the technology developed may not be used in lethal autonomous weapon systems;
- Ensuring employees are well informed about what they work on and allowing open discussion of any related concerns.
Aside from a German arms industry association, which called for a ban on fully autonomous weapons systems earlier this year, most companies have not committed to any regulations, according to Pax.
It is important for nations to immediately take “bold steps to stop lethal autonomous weapons from becoming reality,” the report says. Yet while Australia, Brazil, Chile, and Peru have been outspoken in their opposition to fully autonomous weapons, the US and Russia have so far stymied any attempt to pass a unified international treaty.
“Also countries such as Pakistan, Egypt, and Iraq have been supporters of a ban treaty,” Slijper said. “Probably quite understandable that some of the countries that over the past two decades have experienced drone warfare are probably anxious for what the future might bring to them.”