Rick Smith, the founder and CEO of Axon, was studying finance in graduate school when two men he’d played high school football with were fatally shot after a road rage incident in an Arizona parking lot.
It was, he says, “a senseless tragedy and the most pivotal event of my life.” Smith’s first and only job since then has been to turn the Taser technology that was developed by a NASA scientist—and left to flounder by the US government—into a $3.9 billion market-cap company. Today thousands of law enforcement agencies around the world employ Axon’s Tasers, which shoot electrified probes into the human body from as far as 35 ft (10.7 m) away, painfully immobilizing their target.
The legacy Taser business is about to be eclipsed at Arizona-based Axon by body cameras, which have been adopted by 80% of the US’s big police agencies. The videos they collect are stored in Axon’s cloud facility, shared among agencies, and are being used to develop artificial intelligence-based technologies to aid police.
The company has an ethics board—composed of privacy experts, police officials, and criminal justice activists—that it consults on AI and other policing technology.
Smith’s new book, The End of Killing, which he describes as a “manifesto that says we should not accept that killing people is okay,” was released this month. It laces together humankind’s use of weaponry, the psychological impacts on soldiers and police when they kill, and a sci-fi vision of electronic surveillance to argue that America’s military and law enforcement should replace guns with more humane technology.
As the CEO of a company that sells some of the very products he thinks would help, Smith’s commercial interest in that future is obvious. But his book reflects a unique perspective formed over years of interacting with law enforcement, as well as what he learned after his son entered the military. On police killings he writes:
I’ve never—not once—had a police officer who took a life talk to me about how great they felt after the fact. That just doesn’t happen. Even in instances where police officers are tried and found guilty of firing without cause, they’re devastated by the outcomes and remorseful about their actions. And they almost always believe they had no other choice in the thick of a tense situation.
But what they’re describing isn’t an affirmative case in favor of killing; they’re describing a problem that needs to be solved—a problem with the technology we give those officers and the situations in which it’s applied.
Quartz recently spoke to Smith and Richard Coleman, the company’s vice president for the federal sector, in Washington, DC, where they were meeting with officials at the Department of Homeland Security’s immigration-related agencies to push the merits of Axon’s products. Smith also answered follow-up questions by email; the interview has been edited for clarity and length.
Why did you write The End of Killing?
It was not to criticize the warriors that have to use lethal weapons today. It’s a failure of the community of suppliers that have not given them something better yet.
As I was doing research for the book, I was a little nervous about how police would react to this message. But the feedback has been, “You can’t do this overnight, and you can’t sell us snake oil, but if you can deliver something that outperforms a gun, we’ll put the guns away.”
I’m using this as a rallying cry to push my engineers harder. Today a Taser weapon is a very effective tool, but it is not as reliable as a gun, so cops will use a Taser today if it’s not a lethal force situation. If they’re going to a building and they don’t know what’s inside, they’ve got the gun in their hand. We need to get the cops something where they’re choosing the non-lethal weapon because it will do the job better, and if they make a mistake they can take it back.
One place I’m really trying to get to with the book is the military. Right now the military has virtually no non-lethal capabilities. When my son went to Afghanistan a couple years ago, basically they gave him an M-16 and that’s it. There are countless stories of US war fighters killing women and children because they didn’t have any other tool.
The next war is much more likely to look like Afghanistan than WWII. We have thousands of thermonuclear weapons, for goodness sakes we have enough lethality. Maybe we should start thinking about how to use AI and automation to not kill people.
Many of the homeland security agencies you want to sell tasers and body cameras to are already embroiled in controversy over how they are treating people. How does this technology solve that problem?
Municipal law enforcement was in the same position in 2014, after [the death of Michael Brown in] Ferguson, Missouri. What we’ve seen since the proliferation of body cameras is a 93% drop in complaints against police. Everybody has their unique opinion on that. Some think that the cops have stopped misbehaving, other people will tell you that the police were generally doing a good job all along, and this just now protects them.
Regardless, video is an impartial truth teller. It takes away a lot of the baseless emotion on both sides of the debate in law enforcement.
You’re still going to have some bad situations, but at least you’ve got the facts of what happened. Otherwise you have a lot of hearsay and imagination. To this day no one knows what really happened in the Michael Brown incident, but people sure have a lot of strong emotion about what they think happened.
Can you walk us through exactly how US police departments use body cams?
A [police] agency will typically do a field trial first. Once an agency does a field trial of significant size, it deploys. There is no going back. We hear of prosecutors that won’t take cases now if there’s no video because they basically say, “A jury won’t believe us if we don’t have video in this day and age.”
All they need is an internet connection. We have 350,000 cameras live around the world right now, so bringing on another 10,000 is pretty straightforward for us. From there it is up to the agencies to set their policies on how and what they record. Most have a policy that says, “You need to record every official action with the public,” unless there’s a specific request, like an informant who isn’t comfortable being videotaped.
For an officer throughout the day, they’re turning it on and off, either manually or we have automatic triggers. For example, if you take your gun out of your holster, we have a sensor that makes sure your camera turns on.
When the officer wears it, it is in what is called a “buffering state.” We might be sitting here, and then someone runs a red light. We didn’t know that was going to happen, but you want to capture that. So when they start the recording it grabs the last 30 seconds of the pre-event buffer, and it will go until they turn it off, when it goes back to the buffering state.
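The “buffering state” Smith describes is, in essence, a ring buffer: while idle, the camera continuously overwrites its oldest footage, and starting a recording prepends that buffered window to the clip. A minimal sketch of the concept, assuming illustrative names and parameters (this is not Axon’s actual implementation):

```python
from collections import deque


class PreEventBuffer:
    """Sketch of a body-camera-style pre-event buffer.

    While idle ("buffering state"), frames go into a fixed-size ring
    buffer, so the oldest footage is continuously discarded. When
    recording starts, the buffered frames (the "last 30 seconds") are
    prepended to the clip, which then grows until recording stops.
    """

    def __init__(self, buffer_seconds=30, fps=1):
        # deque with maxlen acts as the ring buffer: once full,
        # appending silently drops the oldest frame.
        self.buffer = deque(maxlen=buffer_seconds * fps)
        self.recording = False
        self.clip = []

    def on_frame(self, frame):
        if self.recording:
            self.clip.append(frame)
        else:
            self.buffer.append(frame)  # overwrites oldest once full

    def start_recording(self):
        # Grab the pre-event buffer, then keep recording until stopped.
        self.clip = list(self.buffer)
        self.recording = True

    def stop_recording(self):
        # Return the finished clip and fall back to the buffering state.
        finished = self.clip
        self.recording = False
        self.clip = []
        self.buffer.clear()
        return finished
```

With a 3-frame buffer, feeding frames f1–f4 then starting a recording yields a clip that begins at f2: the pre-event window capped at the buffer’s length, followed by whatever is captured live.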
Some police agencies are reportedly reconsidering Taser use after high-risk subjects were killed. And studies show officers are not using Tasers instead of guns, they’re using them instead of what might be a less-painful use of force. What’s Axon’s response?
The Taser devices are not yet suitable substitutes for lethal force. That’s the main point of the book, setting forth a roadmap to get there. Today, Taser weapons are used when a situation is escalating, and before it gets to the point where lethal force is needed. We have seen some agencies institute more restrictive policies on the use of the Taser weapon following high-profile incidents. The unfortunate result can be that it reduces officers’ ability to intercede with a Taser weapon before a situation escalates to lethal force.
CBP [US Customs and Border Protection] and ICE [US Immigration and Customs Enforcement] firearm use is relatively rare to begin with, so there doesn’t seem to be a huge demand for a “less lethal” weapon. Will putting Tasers in the hands of more DHS immigration officials increase the chances that asylum seekers are subject to painful tasing?
Physical struggles with unarmed individuals cause more injuries than confrontations with armed subjects. Hence, the use of the Taser is not as a substitute for a firearm, but as a safer means to gain control of a subject compared to other force options.
A study by the Police Executive Research Forum [a nonprofit of police executives that seeks to reduce the use of lethal force] found agencies that deploy Taser CEDs [conducted energy devices] see risks of officer injuries reduced by 70%, and suspect injuries are reduced by more than 40%.
In general, there are risks of abusive behavior by officers with any force instrument. The Taser weapon is the only instrument that has an in-built audit log that can be used to monitor behavior, and to prove or disprove allegations of overuse.
Finally, the best oversight tool to prevent abuse is to use body cameras to record how Taser weapons (and other force options) are being used.
Do you see pushback on body cams coming from the border patrol union, or the ICE union, who have long complained about how the agency is being managed?
What we experienced in local law enforcement was [that] the biggest resistance was unions. It’s a pretty big change, and if you feel you are being nitpicked and then you’re asked to wear a camera, a very human reaction is, “Now they’re going to really nitpick me.”
Sean Smoot, who was the head of the Illinois police union, described it in a meeting, and I think he nailed it. He said, “No cop wants to wear a camera, but after 90 days they won’t go on patrol without it.” After they’ve worn it for a while they realize, “This really protects me.” Police officers cannot lie; if they get caught lying it is catastrophic. Many of the people they are dealing with are criminal-type characters who will say falsehoods and make false claims. These officers were being accused of awful stuff. When they’ve got a camera, they say, “You can’t call my integrity into question any more.”
What sort of feedback have you gotten recently from the Department of Defense on the overall “end of killing” push?
I did present to the Joint Non-Lethal Weapons Directorate. Last year there were rumblings of the Marine Corps divesting itself of the directorate, which might have meant the end of it. We’re in a non-lethal winter right now for the military. There’s not a lot of feeling of success that we should keep banging our heads against that wall.
I’m just trying to spur conversations. If we’re going to spend $1 trillion upgrading our weapons arsenal, we should at least think about putting some wood behind the arrow on giving my son something so that when he’s in Afghanistan he doesn’t have to make a decision to kill someone or let them get close enough that they could detonate a body armor explosive.
That is a solvable problem, but the first step is that we have to agree that it is worth doing, and then explore how we would do it.
The growth of body cams in law enforcement raises a lot of privacy concerns, particularly around people committing crimes or just being accused of them. How are you reassuring people about privacy?
We formed an AI ethics and privacy advisory board about two years ago, when we first acquired one of these AI teams. There was a lot of initial reaction in the news media: “Axon is going to build the Orwellian surveillance state—”
Which exists in parts of China…
Yeah. I was actually just at a panel at NYU with Barry Friedman, who runs the Policing Project and is one of the more active members on our advisory board. Typically, police tech companies engage a lot more with police than with some of these privacy groups. For me it has been really interesting to understand [what] their perspectives are.
We have an opportunity to do this the right way. There’s a lot of folks out there like Motorola and NEC who are already selling facial recognition to law enforcement. We have paused and said, “Let’s hold until we understand how we do this.”
I think facial recognition will be a fairly ubiquitous technology. The question is what sorts of controls and transparency should you have in place. We have another meeting of our AI board this spring. By mid-year we expect to be making some substantive announcements about how we will approach these things.
One of the biggest concerns is not that the technology is bad. It’s that if you haven’t thought through how it can be misused, you’re more likely to have random misuse cases.
How directly does the AI ethics board really shape your decisions?
We specifically set things up so this board does not vote to approve things. We said if [we] give this board veto power, we’re going to have to gerrymander our own board to make sure it’s not overrepresented [by privacy groups]. We sell to police, they have a really valuable voice. The ACLU and others also have a voice, but we have to come to a reasonable balance.
My commitment to them is, “We’re not going to deploy something unless we understand what your concerns are. We may end up not agreeing.” The stick will be, there will be a cost to me having an advisory board that is coming out publicly and saying they’re against some of the things we’re doing.
Once we make a product and go public, if they disagree with how we’re doing it, [they’re free to discuss it].
Can you give us a specific example of their impact on a company decision?
We were launching a [software] product so a police officer can text you a link and you can submit evidence, like a photo. We designed the system so that it does not collect personally identifiable information—you can submit information anonymously. One of the advisory board members said, “Wait a minute, are you scrubbing any of the data associated with those photos?”
Oh, good point, no we’re not. If you send a photo to someone, it has your GPS coordinates and identifiers of your device. That got us to pay much closer attention to being very clear with the public, telling them, “Be aware this can be traced back if a law enforcement agency puts an investigating unit on there.”