Apple CEO Tim Cook doesn’t think the future of artificial intelligence has to infringe on users’ privacy the way it does now. Responding to a question about Apple’s virtual assistant Siri on a call to discuss the company’s latest quarterly earnings, Cook said:
“In terms of the balance of privacy and AI, this is a long conversation, but at a high level, this is a false tradeoff. People would like you to believe you have to give up privacy to have AI do something for you, but we don’t buy that. It might take more work, it might take more thinking, but I don’t think we should throw our privacy away. It’s sort of like the age-old argument between privacy and security. You should have both. You shouldn’t have to make a choice.”
Unfortunately, that’s not the way things currently work.
Artificial intelligence is in the middle of a bit of a privacy crisis. To deliver the AI-powered features that companies like Apple and Google promise, such as automatically organized photos and personalized recommendations, smartphones can’t keep the information they collect encrypted. For instance, Google’s new Allo messaging app requires encryption to be turned off while using Google Assistant. If the files were encrypted, current AI would have to decrypt them before it could learn from them, which would be slow and consume both processing and battery power on a smartphone.
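To see why encryption gets in the way, consider a toy sketch (an illustration only, using a stand-in XOR cipher rather than real cryptography, and a trivial keyword tagger standing in for an AI feature): the feature works on plaintext but finds nothing in ciphertext, so the device must first decrypt, spending the CPU and battery cycles described above.

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    # Stand-in "encryption" for illustration; real systems use AES etc.
    return bytes(b ^ key for b in data)

def tag_photo(caption: str) -> list:
    # Trivial stand-in for an AI feature: tag captions containing keywords.
    keywords = ["beach", "dog", "birthday"]
    return [k for k in keywords if k in caption.lower()]

caption = "Dog at the beach"
ciphertext = xor_cipher(caption.encode(), key=42)

# Running the feature on ciphertext yields nothing useful...
tags_encrypted = tag_photo(ciphertext.decode("latin-1"))

# ...so the data must be decrypted before any learning can happen.
plaintext = xor_cipher(ciphertext, key=42).decode()
tags_plain = tag_photo(plaintext)
```

The same tradeoff scales up: any model, simple or deep, sees only noise until the data is decrypted.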
Some research is being done to speed up the process, but even the best efforts seem far from production-ready. Security researchers at Google and OpenAI recently published a technique that obscures private data to keep it safe rather than encrypting it, and the team tells Quartz that they’re currently investigating how to make encryption work with their system.
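Obscuring data in this sense typically means adding carefully calibrated random noise, as in differential privacy, so that aggregate statistics stay useful while any individual’s record is masked. A minimal sketch of that core idea (an illustration of the general approach, not the researchers’ actual system; the epsilon value and count are hypothetical):

```python
import math
import random

def noisy_count(true_count, epsilon=0.5):
    """Return a differentially private count: the true value plus
    Laplace noise with scale 1/epsilon (a count has sensitivity 1)."""
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical query: how many users in a sample typed a given phrase.
true_count = 120
private_count = noisy_count(true_count, epsilon=0.5)
```

The noisy answer is close enough to be useful in aggregate, but no single user’s presence or absence can be confidently inferred from it. The open problem the researchers describe is combining guarantees like this with data that stays encrypted end to end.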
Apple doesn’t publish any of its research, so it’s tough to gauge its progress. But from Cook’s statement, it doesn’t seem like Apple has cracked the code yet either.