Automated systems powered by artificial intelligence are beginning to act as gatekeepers on the internet. Facebook and Twitter use such algorithms to block spam and content deemed inappropriate for their platforms; YouTube uses them to fight copyright infringement.
But these algorithms aren’t perfect. Take Google’s new Cloud Video Intelligence API, an algorithm that automatically detects what’s in a video. Researchers at the University of Washington found that by inserting still images into a video, they could trick the algorithm into thinking the video was about a completely different topic.
In their paper, the researchers suggest the technique could serve as a backdoor for malicious users to get illegal or otherwise inappropriate material onto YouTube without the company’s algorithms catching it. The same team previously found that Google and Jigsaw’s hate-speech-fighting algorithm could be circumvented with stray spaces and punctuation. YouTube, however, currently uses a separate set of algorithms to catch abusive or illegal content, and it’s unknown whether the attack would work against those as well.
The University of Washington team was able to probe this vulnerability because Google sells the software as an application programming interface (API), which developers can pay to use in their own websites and applications. In one test using the API, the researchers scanned a video about animals, and the algorithm returned the tags “animal,” “wildlife,” “zoo,” “terrestrial animal,” “nature,” and “tourism.” But when they inserted a picture of an Audi car into one video frame every second, barely noticeable to a viewer, the algorithm returned “Audi” as the highest-scoring tag, followed by “vehicle,” “car,” “motor vehicle,” and “Audi A4.”
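The manipulation itself requires no access to Google’s models. As a rough illustration, here is a minimal Python sketch of the frame-insertion step using OpenCV; the file names and the one-frame-per-second rate are illustrative assumptions based on the attack as described above, not the researchers’ actual code.

```python
import cv2

# Hypothetical file names, for illustration only.
SOURCE_VIDEO = "animals.mp4"
ADVERSARIAL_IMAGE = "audi.jpg"
OUTPUT_VIDEO = "animals_with_audi.mp4"

cap = cv2.VideoCapture(SOURCE_VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Resize the still image to match the video's dimensions.
still = cv2.resize(cv2.imread(ADVERSARIAL_IMAGE), (width, height))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter(OUTPUT_VIDEO, fourcc, fps, (width, height))

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Replace one frame per second of video with the still image,
    # as in the attack described above.
    if frame_index % int(round(fps)) == 0:
        out.write(still)
    else:
        out.write(frame)
    frame_index += 1

cap.release()
out.release()
```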
The researchers replicated the attack three more times, using pictures of a building, a plate of noodles, and a laptop.
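For context, querying the Video Intelligence API for video-level labels looks roughly like this with Google’s current Python client; the storage path is a placeholder, and the client interface may differ from the one available to the researchers at the time.

```python
from google.cloud import videointelligence

# Hypothetical Cloud Storage path for the doctored video.
INPUT_URI = "gs://my-bucket/animals_with_audi.mp4"

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": INPUT_URI,
    }
)
result = operation.result(timeout=300)

# Print each label the API assigned to the video as a whole,
# along with its highest confidence score.
for label in result.annotation_results[0].segment_label_annotations:
    confidence = max(s.confidence for s in label.segments)
    print(f"{label.entity.description}: {confidence:.2f}")
```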
However, the UW team didn’t figure out why the attack works, only that it does.
This research comes at a time when Google is under fire: media outlets including the Wall Street Journal have reported that advertisements from large brands such as Disney and Coca-Cola were being played against racist and otherwise objectionable content on YouTube.
Now, Google is employing more AI as part of the solution, the company’s chief business officer, Philipp Schindler, told Bloomberg.
“We switched to a completely new generation of our latest and greatest machine-learning models,” Schindler said. “We had not deployed it to this problem, because it was a tiny, tiny problem. We have limited resources.”
Google did not respond to a request for comment.
Correction: An earlier version of this article stated YouTube used the algorithms behind the Video Intelligence API, according to a Google Cloud blog post. YouTube uses a separate set of algorithms.