A legal question for the AI age: Is tricking a robot the same thing as hacking it?

You see a turtle, your computer sees a rifle.
Image: Anish Athalye

A team of computer scientists and a lawyer at the University of Washington are raising a curious question: Do current US laws cover cutting-edge research that allows people to bend AI to their will?

The research, called adversarial machine learning, takes advantage of the way AI looks at the world, tricking an algorithm into making a decision other than the one it was designed to make. For example, an attacker might trick an AI into perceiving a stop sign as a speed-limit sign, or poison an automated credit-rating system in order to get a cheaper loan.
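For a sense of the mechanics, here is a minimal, hypothetical sketch in Python of the "fast-gradient" style of attack that much of this research builds on. The toy linear classifier, the random weights, and the perturbation budget are all illustrative assumptions, not code from the University of Washington paper; the point is only that a small, structured change to an input can flip a model's decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a linear model whose output is
# the probability that the input is a stop sign. (Assumed for illustration.)
w = rng.normal(size=100)   # model weights, random for this sketch
b = 0.0

def stop_sign_score(x):
    """Probability the model assigns to 'this is a stop sign'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model currently gets right (score above 0.5).
x = rng.normal(size=100)
if stop_sign_score(x) < 0.5:
    x = -x  # flip the toy input so it starts out classified as a stop sign

# Fast-gradient-style perturbation: push each input feature slightly in the
# direction that lowers the stop-sign score. For a linear model, that
# direction is simply -sign(w).
epsilon = 0.5                       # perturbation budget (assumed)
x_adv = x - epsilon * np.sign(w)

print(f"score before perturbation: {stop_sign_score(x):.3f}")
print(f"score after perturbation:  {stop_sign_score(x_adv):.3f}")
```

A real attack targets a deep neural network and constrains the perturbation so a human barely notices it, but the underlying trick, nudging the input along the model's own gradient, is the same.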

The issue could affect every tech company using AI today: If this kind of intervention constitutes hacking, are companies now legally required to protect their systems from adversarial machine learning as they do typical hacking? And if this is not hacking under the legal definition, who’s responsible if an attacker crashes someone else’s car by tricking its AI?

The researchers focused on the US law most broadly applied to hacking, the 1986 Computer Fraud and Abuse Act (CFAA). The law is now more than 30 years old, originally spurred by lawmakers' fears after the movie WarGames, but many lawyers still view it as applicable to modern hacking.

Their paper takes a broad view of the legal definition of hacking under the CFAA, applying it even to previous work by one of the co-authors, Yoshi Kohno. Kohno's research, which used specially coded DNA to hack a DNA-sequencing machine, could be viewed as hacking under the law, says co-author Ryan Calo.

“[DNA hacking] still meets the idea of hacking that dates back to the 1980s, but this stuff about compromising a system by tricking it seems like a paradigm shift,” Calo says.

Here are two potential cases that the researchers lay out:

Self-driving cars:

An engineer extensively tests the detector used by the driverless-car company where she works. She reports to the founder that she's found a way to deliberately deface a stop sign so that it tricks the car into accelerating instead of stopping. The founder suspends operations of his own fleet but defaces stop signs near his competitor's driverless-car plant. A person is injured when one of the competitor's driverless cars misses a stop sign and collides with another vehicle.

Adversarial theft:

An individual steals from a grocery store equipped with facial recognition cameras. In order to reduce the likelihood of detection, the individual wears makeup she understands will make her look like another person entirely to the machine learning model. However, she looks like herself to other shoppers and to grocery store staff.

The authors write that a case could be made for either classification (meaning hacking or not hacking) in court. Either way, the debate rests on what counts as a “transmission” and what counts as “unauthorized access,” two terms in the CFAA that define hacking. Does a sticker that tricks AI count as a transmission, like breaking into another computer system with malicious code? Does it count as unauthorized access to play a sound into a microphone, or show a picture to an image recognition algorithm?

Computer scientists whom Quartz has previously interviewed on the subject have been resistant to calling these interventions “hacking,” because adversarial machine learning often doesn't require overcoming any defense in the code or gaining access to the computer.

“Each time we tried to put adversarial machine learning into a bucket, we came up with these ‘on the one hand, on the other hand’ situations,” Calo said. “Our ultimate conclusion is not that it’s surely going to count or surely not, but rather that there’s sufficient ambiguity that’s creating problems unless that ambiguity is resolved.”