
The artificial intelligence arms race is here.
Countries around the world are spending millions to introduce the latest artificial intelligence technology into their military operations.
Artificial intelligence can automate some military operations and save crucial time by speeding up aspects of strategic decision-making under human supervision, such as pinpointing targets and developing courses of action.
But for all its merits, the technology also worries some experts.
“We all probably suffer from automation bias, which is this idea that we are tempted to and often will accept the recommendation, for example, that a large language model spits out, or prediction that one of these systems is making, because we feel as though the system must have more information than we do, and must be processing it and sequencing it and ordering it better than we could,” legal scholar and former associate White House counsel Ashley Deeks told Quartz earlier this month.
Exacerbating the problem, AI systems are like "black boxes," according to Deeks, in that it is tough for users to understand how or why they reach certain conclusions. That opacity could make it even harder for officers to figure out what to trust when their gut and experience say one thing and the AI system says another.