Somewhere within Apple’s tech to monitor child sexual abuse is a privacy nightmare waiting to happen.
In August, the Cupertino company shared plans to use “client-side scanning” (CSS) to detect child sexual abuse material by scanning images on users’ devices before they’re uploaded to its iCloud Photos storage service. The system could eventually have reached more than a billion Apple devices globally.
But come September, Apple went back to the drawing board after massive public outcry.
Yesterday (Oct. 14), over a dozen cybersecurity experts from Harvard, MIT, Cambridge, and other esteemed institutions joined the dissenters’ chorus. In their 46-page study, they questioned the technology’s efficacy, arguing that it poses “serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic.”
Scanning iCloud photos for illegal activity is a slippery slope
“Plainly put, it is a dangerous technology. Even if deployed initially to scan for child sex-abuse material, content that is clearly illegal, there would be enormous pressure to expand its scope. We would then be hard-pressed to find any way to resist its expansion or to control abuse of the system,” the researchers said.
The analysis, which began before the announcement to halt the tech’s deployment, is crucial because Apple has not scrapped the idea entirely. Moreover, it’s not just Apple walking this thin line between public security and consumer privacy. Documents released by the European Union suggest authorities there have similar plans not just for child sexual abuse, but also terrorism and organized crime.
“Such bulk surveillance can result in a significant chilling effect on freedom of speech and, indeed, on democracy itself,” the paper states.
Apple’s defense of scanning photos for child sexual abuse
Initially, Apple spent some time allaying fears, sharing detailed explanations and interviews with company executives.
For one, Apple reassured users, images would be checked only against a database of photos flagged in at least two nations. The company also said it would flag an iCloud account only once it contained at least 30 problematic images, and that it was open to changing that threshold once the tech was out in the real world.
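To make the threshold idea concrete, here is a minimal sketch, in Python, of count-based flagging. It is not Apple’s code: Apple’s published design wraps this logic in cryptography so that individual matches below the threshold are not visible to the company, and every name and hash value below is a placeholder.

```python
# Illustrative only: a naive, non-cryptographic sketch of threshold-based
# flagging. None of these names or values come from Apple's implementation.

KNOWN_HASHES = {"a3f1...", "9c0d..."}  # placeholder fingerprints of known images
MATCH_THRESHOLD = 30  # the figure Apple cited publicly


def count_matches(image_hashes: list[str]) -> int:
    """Count how many of a user's image hashes appear in the known-bad set."""
    return sum(1 for h in image_hashes if h in KNOWN_HASHES)


def should_flag_account(image_hashes: list[str]) -> bool:
    """Flag an account only once its matches reach the public threshold."""
    return count_matches(image_hashes) >= MATCH_THRESHOLD
```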
But an open letter from over 90 policy advocates, an Electronic Frontier Foundation (EFF) petition with 25,000 customer signatures, and hundreds of internal Slack chats from worried Apple employees later, the company paused its plans. On Sept. 23, it delayed the rollout, buying time to “make improvements.”
Some opponents want the company to abandon the plan altogether. For instance, after all of Apple’s communication, the EFF still said, “even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.”
Will Cathcart, the head of Facebook’s messaging service WhatsApp, also publicly opposed the system.
Apple customers’ privacy vs strong-arm governments
Apple has stressed that it would refuse demands by authoritarian governments to expand the image-detection system beyond this initial purpose, but there is no way to guarantee that.
“It could stop enforcing this policy locally or globally, whether by a company decision or under pressure from states wishing to maintain sovereignty within their borders,” the researchers wrote.
This is a legitimate concern since Apple’s track record isn’t great when it comes to resisting such pressures. A few years ago, it ceded ownership of the iCloud data of its Chinese users, moving it to data centers under the control of a Chinese state-owned company. Last month, it removed jailed Kremlin critic Alexei Navalny’s namesake tactical voting app “Navalny” from its Russian app store.
What’s more, the technology isn’t even robust. People have already detailed ways to beat the system by altering images. “Early tests show that it can tolerate image resizing and compression, but not cropping or rotations,” developer Asuhariet Ygvar, who claims to have reverse-engineered the NeuralHash algorithm used in Apple’s detection system, wrote on Reddit. And the tech does not scan videos, a widely used format among offenders.
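For a sense of why this kind of perceptual matching is brittle, consider a toy “average hash.” The sketch below (Python, using the Pillow imaging library) is far simpler than NeuralHash, which is a neural-network-based perceptual hash, but it illustrates the same trade-off Ygvar describes: scaling an image barely changes its fingerprint, while cropping or rotating it usually does. The file names are hypothetical.

```python
# A toy perceptual "average hash" -- much simpler than NeuralHash, but it
# shows the same weakness: small geometric edits change the fingerprint.
# Requires the Pillow library (pip install Pillow); file paths are examples.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes (near 0 = likely the same image)."""
    return bin(a ^ b).count("1")


# Example usage: a resized copy usually stays within a few bits of the original,
# while a cropped or rotated copy typically drifts far past any sensible threshold.
# original = average_hash("photo.jpg")
# edited = average_hash("photo_cropped.jpg")
# print(hamming_distance(original, edited))
```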