Anti-surveillance t-shirts don’t fool security cameras

This AI-evading t-shirt is still only 63% effective in the physical world.
Photo: Matthew Modoono/Northeastern University

Can a simple outfit change render you invisible to security cameras? So-called “adversarial” t-shirts, printed with images that purportedly fool the face-detecting algorithms used in surveillance systems, are having a moment.

A search for “adversarial t-shirt” on Google yields several options for both men and women from online clothing merchants like Redbubble and Teespring. A spike in protests over the past three years, including the pro-democracy protests in Hong Kong, has also led to a rise in fashion that promises to evade AI-enabled security systems designed to identify people and moving objects.

Many of these t-shirts carry multi-colored patterns known as adversarial images, which can throw off deep learning algorithms trained to recognize familiar objects. Unfortunately for protesters and the companies hoping to cash in on them, the science behind such AI-evading techniques is far from exact. Even clothing built on the latest adversarial research stands a high chance of being detected anyway.

Scientists at Northeastern University and the MIT-IBM Watson AI Lab recently created an adversarial t-shirt with an image designed to fool person-recognition systems. In a preprint paper published in October, the researchers reveal that their t-shirt achieved a 57% success rate when used in the “physical world,” that is, when video footage of an actual person wearing the t-shirt is tested against surveillance systems. Success rates were considerably higher (74%) when the researchers digitally added the adversarial image into the video footage.

In other words, even the most advanced adversarial t-shirt available today will fail roughly 43% of the time. Unsurprisingly, scientists are doubtful that the mass-market surveillance fashion available for purchase will actually do what it advertises.

“Ours have been optimized a lot and can only have less than a 60% success rate,” wrote assistant professor Xue Lin, one of the authors of the Northeastern study, when asked about the effectiveness of the t-shirts found on Redbubble and other online merchants.

In the study co-authored by Lin, researchers compiled 30 videos, each five to 10 seconds long, of a moving person wearing a checkerboard t-shirt. These videos served as the training data set on which they based their adversarial pattern. They then created 10 videos in the same setting as the training data set, but with a different person, and used this data set to evaluate how effective their “learnt” adversarial pattern was in the digital world. Finally, they printed a new t-shirt with the “learnt” adversarial image and filmed 10 test videos of a person wearing it. In this final test, the t-shirt bypassed object-detection systems 63% of the time.
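Measuring a success rate like that is simple in principle: run a person detector over every test frame and count how often it misses the t-shirt wearer. The sketch below shows one way to score that; the detector call (detect_persons) and the frames-with-ground-truth input are hypothetical stand-ins, not the authors’ actual evaluation code.

```python
# Hedged sketch: attack success rate = fraction of frames in which no
# detection overlaps the (hand-labeled) box around the t-shirt wearer.
# detect_persons is a hypothetical detector interface, not YOLOv2 itself.
from typing import Callable, Iterable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def attack_success_rate(frames_with_truth: Iterable[Tuple[object, Box]],
                        detect_persons: Callable[[object], List[Box]],
                        iou_threshold: float = 0.5) -> float:
    """Fraction of frames in which the detector misses the wearer."""
    missed = total = 0
    for frame, wearer_box in frames_with_truth:
        total += 1
        detections = detect_persons(frame)
        if not any(iou(d, wearer_box) >= iou_threshold for d in detections):
            missed += 1  # no detection overlaps the wearer: attack succeeded
    return missed / max(total, 1)
```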

Creating an adversarial t-shirt is challenging partly because people move. The researchers noted that a person’s movement can wrinkle the pattern of the adversarial image. “This makes it challenging to develop adversarial t-shirts effectively in the real world,” they noted in the paper. To cope with this, the researchers used a technique called Thin Plate Spline (TPS) mapping, which models how a non-rigid surface (like clothing) stretches and wrinkles, making it possible to learn adversarial patterns that hold up under those deformations.
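To give a flavor of what a TPS deformation does, the sketch below warps a checkerboard pattern (the kind used in the training videos) by randomly displacing a coarse control grid. It uses SciPy’s general radial-basis interpolator with a thin-plate-spline kernel as a stand-in for the paper’s own TPS transformer.

```python
# Hedged sketch: simulate cloth wrinkles by warping a pattern with a random
# thin plate spline (TPS) deformation. Uses SciPy's RBF interpolator with a
# thin-plate-spline kernel; this is an illustration, not the paper's code.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(pattern: np.ndarray, jitter: float = 5.0, grid: int = 4,
             seed: int = 0) -> np.ndarray:
    """Warp a (H, W) pattern by randomly displacing a grid of control points."""
    rng = np.random.default_rng(seed)
    h, w = pattern.shape

    # Regular grid of control points and a randomly perturbed copy of it.
    ys, xs = np.meshgrid(np.linspace(0, h - 1, grid),
                         np.linspace(0, w - 1, grid), indexing="ij")
    src = np.stack([ys.ravel(), xs.ravel()], axis=1)
    dst = src + rng.normal(scale=jitter, size=src.shape)

    # Fit a TPS that sends each control point to its perturbed position,
    # then evaluate it at every pixel to get the sampling coordinates.
    tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = tps(np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float))

    # Resample the pattern at the warped coordinates (simulated wrinkles).
    warped = map_coordinates(pattern, [coords[:, 0], coords[:, 1]],
                             order=1, mode="nearest")
    return warped.reshape(h, w)

# Example: wrinkle a 64x64 checkerboard like the one in the training videos.
checker = (np.indices((64, 64)).sum(axis=0) // 8 % 2).astype(float)
wrinkled = tps_warp(checker)
```

Optimizing the pattern against many such random warps is what lets it keep working once it is printed on fabric that folds and stretches.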

Earlier this year, researchers from KU Leuven in Belgium created a cardboard sign that could fool the YOLO network (a system for object detection) used in automated surveillance cameras. The design quickly went viral on YouTube and social media, likely spurring the creation of the aforementioned AI-evasion t-shirts. But the KU Leuven researchers only tested their design on a static surface, not a t-shirt.

Toon Goedemé, one of the researchers on the Leuven project, told Quartz that the Northeastern study was interesting because it extended the “rigid” cardboard design to a more “flexible textile print.” The Northeastern researchers also found a way to evade multiple person-detection systems, such as YOLOv2 and Faster R-CNN.

Coming up with an adversarial pattern is relatively easy when you’re familiar with the internals of the detection model you’re targeting. Goedemé said that such attacks can be derived against detection software from major companies, including systems offered through Amazon’s AWS or Microsoft’s Azure. “So all neural networks inside AWS or Azure can be considered to find a working patch for it,” wrote Goedemé. In other words, scientists can theoretically design a bespoke adversarial image to fool any image-recognition system, though it likely won’t be 100% effective (especially when printed on a moving object, like a t-shirt).
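What “familiar with the internals” buys an attacker is gradients: if you can differentiate through a detector, you can iteratively adjust a patch until the detector’s confidence that a person is present collapses. The sketch below shows the general idea in PyTorch; person_confidence and apply_patch are hypothetical placeholders, not the API of AWS, Azure, or any specific detector.

```python
# Hedged sketch of a white-box adversarial-patch attack: gradient descent on
# the patch pixels to suppress the detector's "person" confidence.
# person_confidence and apply_patch are hypothetical stand-ins.
import torch

def optimize_patch(person_confidence, apply_patch, images,
                   patch_size=(3, 64, 64), steps=200, lr=0.03):
    """Optimize a patch so the detector's person scores drop on patched images.

    person_confidence(images) -> per-image max "person" score (differentiable).
    apply_patch(images, patch) -> images with the patch pasted on each person.
    """
    patch = torch.rand(patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch.clamp(0, 1))
        loss = person_confidence(patched).mean()  # lower score = better attack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

Cloud services don’t expose their internals, but patches optimized against one open detector often transfer to others, which is roughly the route Goedemé is describing.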

But the designs capable of fooling major object-recognition systems aren’t the ones you’ll find in stores. “I don’t think any of these designs can fool current systems,” wrote Heikki Huttunen, an associate professor who specializes in machine learning and signal processing at the University of Tampere in Finland, in an email to Quartz. “But I think a properly designed t-shirt could make the life of a detection system harder. For now, I don’t think such designs are for sale.”

And even if a properly designed t-shirt does hit stores, it’ll take only a matter of days for the companies that design recognition systems to figure out how to thwart it. “This only requires collecting more data with people wearing adversarial shirts, manually pointing the people and retraining and deploying the model. All this should take a few days (at most),” noted Huttunen.
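The fix Huttunen sketches amounts to ordinary data collection and fine-tuning. A rough outline follows, with placeholder datasets and a placeholder detector rather than any vendor’s actual pipeline.

```python
# Hedged sketch of the countermeasure: add newly labeled footage of people
# wearing adversarial shirts to the training data and fine-tune the detector.
# The datasets and the detector's training interface are placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def finetune(detector, original_dataset, adversarial_shirt_dataset,
             epochs=3, lr=1e-4):
    """Fine-tune a detector on original data plus hand-labeled adversarial clips."""
    data = ConcatDataset([original_dataset, adversarial_shirt_dataset])
    # Keep variable-size detection targets grouped per image.
    loader = DataLoader(data, batch_size=8, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    opt = torch.optim.SGD(detector.parameters(), lr=lr, momentum=0.9)
    detector.train()
    for _ in range(epochs):
        for images, targets in loader:
            # Assumes the detector returns a scalar training loss when given
            # images and their ground-truth boxes.
            loss = detector(list(images), list(targets))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return detector
```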

Such a scenario would require the makers of adversarial t-shirts to constantly design new patterns to fool ever-evolving AI-recognition systems. Huttunen noted that both the Northeastern University study and a recent study by the University of Maryland and Facebook’s AI division present algorithms for generating adversarial patterns, not the patterns themselves. “Thus, it would be possible for the attacker to design another t-shirt to fool the newly trained model, if it were accessible by the attacker,” wrote Huttunen.