The research, whipped up in the five days since the Illinois paper was published, shows a printed picture of a kitten fooling image-recognition AI into thinking it’s a picture of a “monitor” or “desktop computer” from a number of angles and at varying distances.
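The article doesn’t spell out the OpenAI group’s method, but attacks that survive changes in angle and distance are typically built by optimizing the perturbation over random transformations of the image. Below is a minimal sketch of that expectation-over-transformation idea in PyTorch; the model choice, class index, and all parameters are illustrative assumptions, not the researchers’ actual code.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

# Pretrained classifier to attack; the model choice is illustrative.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Standard ImageNet normalization, applied inside the loss so the
# perturbation itself lives in plain [0, 1] pixel space.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def classify(img):
    return model((img - MEAN) / STD)

def random_view(img):
    """Randomly rotate and rescale the image, simulating a printed
    picture seen from different angles and distances."""
    angle = float(torch.empty(1).uniform_(-15.0, 15.0))
    scale = float(torch.empty(1).uniform_(0.8, 1.2))
    return TF.affine(img, angle=angle, translate=[0, 0],
                     scale=scale, shear=[0.0])

def eot_attack(image, target_class, steps=200, lr=0.01, eps=0.05):
    """Find a small perturbation that keeps fooling the classifier
    across random viewpoints (expectation over transformation)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        # Average the targeted loss over several random viewpoints,
        # so the perturbation can't overfit to one exact framing.
        loss = sum(
            torch.nn.functional.cross_entropy(
                classify(random_view(adv)), target)
            for _ in range(4)) / 4
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually subtle
    return (image + delta).clamp(0.0, 1.0).detach()

# Stand-in for a kitten photo; the target index (assumed here to be
# ImageNet's "monitor" class) is illustrative.
kitten = torch.rand(1, 3, 224, 224)
adv_kitten = eot_attack(kitten, target_class=664)
```

The key design choice is averaging the loss over random viewpoints each step: a perturbation tuned to a single framing tends to break the moment the camera moves, while one optimized in expectation keeps working as the picture tilts or recedes.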

“When this paper claimed a simple fix, we were curious to reproduce the results for ourselves (and we did, in a sense). Then, we were curious if a determined attacker could break it, and we found that it was possible,” OpenAI researcher Anish Athalye told Quartz.

The rapid response to the Illinois paper illustrates two ideas: 1) never say never in AI research, and 2) autonomous vehicles face a bumpy road ahead. If fellow researchers can retaliate in a matter of days, makers of autonomous cars will find themselves locked in a daily game of cat-and-mouse against a slew of decentralized, constantly evolving attacks in the real world.

Automakers might also have much simpler problems to fix before they can tackle adversarial examples. It’s entirely possible that a black marker and some poster board would be just as effective as a maliciously crafted machine-learning attack: a Carnegie Mellon professor has documented how his Tesla mistook a highway junction sign for a 105-mile-per-hour speed limit sign.

David Forsyth, the Illinois professor who co-authored the paper, explained that his work examined a narrow question, whether traditional, straightforward adversarial examples would work on self-driving cars, and was just the beginning of determining how adversarial examples behave in the wild.

“The correct conclusion is ‘more research is needed,’” Forsyth said.
