When asked about the biggest benefits of Apple’s newly announced set of AI tools — aptly named Apple Intelligence — CEO Tim Cook’s answer is simple: privacy.
“The idea that it’s private, I think, is a very big idea in today’s world. People want to know in some kind of way that [AI] is personal to them, but also private,” the Apple chief told the Washington Post in an interview during the company’s annual Worldwide Developers Conference Monday. “And these two things generally haven’t gone together very well. We found a way to thread the needle.”
But therein lies the question: Why would Apple partner with an AI company widely criticized for breaching consumers’ privacy to scrape their data? Cook didn’t address ChatGPT’s data privacy concerns head-on, but he told the Post that OpenAI has “done some things on privacy that I like,” adding that “[t]hey’re not tracking IP addresses and some of the things like that that we’re very keen on not happening.”
Luckily, Apple has privacy guardrails of its own. Unlike many other AI tools, its large language models will typically run on users’ devices rather than on remote servers. Apple also launched “Private Cloud Compute,” its way of ensuring that AI queries which do run on its servers are protected and that data is never stored by Apple. Users can also turn off the ChatGPT integration entirely.
One area where Cook seemed less willing to take an absolutist stance: AI hallucinations. The CEO said he’s “not 100 percent” certain that Apple Intelligence won’t hallucinate.
“But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in,” Cook continued. “So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”