
The “third revolution in warfare” is weapons that can decide to kill on their own

Killer robots must be stopped, says Pax
Image: One of our future robot overlords? (AP Photo/Pavel Golovkin)
By Justin Rohrlich

Geopolitics reporter


If there’s one thing we’ve learned in recent years, it’s that humans aren’t great at predicting the consequences of technology. After all, social media platforms, which began as a way for friends to connect online, are today being used to radicalize terrorists and potentially swing presidential elections.

Imagine, then, the chaos that could ensue with new technologies that don’t even pretend to be friendly. The advent of lethal autonomous weapons—“killer robots” to detractors—has many analysts alarmed. Equipped with artificial intelligence, some of these weapons could, without proximate human control, select and eliminate targets with a speed and efficiency soldiers can’t possibly match.

Seven nations are known to be pursuing such weapons: the US, China, Russia, Britain, France, Israel, and South Korea. Projects include AI-equipped tanks, fighter jets, and machine guns.

Self-imposed guidelines exist, but experts say they’re insufficient. US military policy mandates “appropriate levels” of human judgment when making firing decisions, but doesn’t define what counts as appropriate and allows for exceptions. The US is also among a handful of nations standing in the way of international regulations in this arena. China says it supports a ban on the use, but not the development, of the weapons.

Yet nothing short of a complete ban will prevent eventual disaster, argues a new report from Pax, a Dutch anti-war NGO that fears an arms race is breaking out.

The fear is merited. AI experts consider the weapons to be the “third revolution in warfare” (pdf). As with the first two, gunpowder and nuclear bombs, such systems could quickly prove their worth, giving the side that possesses them a nearly insurmountable advantage.

Without a ban, AI weapons could become established in militaries around the world. And as with social media, trying to apply regulations retroactively would prove difficult, with fierce resistance from the companies involved—and the technology racing ahead in the meantime.

As Pax states, “It would be deeply unethical to delegate the decision over life and death to a machine or algorithms.”

The question is whether we will act in time. Speed, as with AI weapons, is of the essence.

This essay was originally published in the weekend edition of the Quartz Daily Brief newsletter.
