As research teams at Google, Microsoft, Facebook, IBM, and even Amazon have broken new ground in artificial intelligence in recent years, Apple always seemed to be the odd man out. The company appeared too closed off to meaningfully integrate AI into its software: it wasn't part of the research community, and it didn't offer developer tools for others to bring AI to its systems.
That's changing. Through a slew of updates and announcements today at its annual developer conference, Apple made it clear that the machine learning found everywhere else in Silicon Valley is foundational to its software as well, and it's giving developers the power to use AI in their own iOS apps, too.
The biggest news today for developers looking to build AI into their iOS apps was barely mentioned on stage. It's a new set of machine learning models and application programming interfaces (APIs) built by Apple, called Core ML. Developers can use these tools to build image recognition into their photo apps, or have a chatbot understand what you're telling it with natural language processing. Apple has initially released four of these models for image recognition, along with APIs for computer vision and natural language processing. These tools run locally on the user's device, meaning data stays private and never needs to be processed in the cloud. This idea isn't new: even data hoarders like Google have realized the value of letting users keep and process data on their own devices.
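For a sense of what this looks like in practice, here's a minimal sketch of on-device image classification using Core ML together with the new Vision API. (`FlowerClassifier` is a hypothetical model class name; Xcode generates a class like it for any `.mlmodel` file added to a project.)

```swift
import CoreML
import Vision

// A sketch of classifying an image entirely on-device with Core ML.
// "FlowerClassifier" is hypothetical; substitute any Core ML model class.
func classify(_ image: CGImage) {
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Results come back ranked by confidence. Everything runs locally,
        // so the image never leaves the device.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

The developer never touches a network stack or a cloud endpoint; the model's weights ship inside the app itself.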
Apple also made it easy for AI developers to bring their own flavors of AI to Apple devices. Certain kinds of deep neural networks can be converted directly into the Core ML format. Apple now supports Caffe, an open-source deep-learning framework developed at the University of California, Berkeley, for building and training neural networks, and Keras, a higher-level tool that makes that process easier. It notably doesn't support TensorFlow, Google's open-source AI framework, which is by far the largest in the AI community. However, there's a loophole so creators can build their own converters. (I personally expect a TensorFlow converter in a matter of days, not weeks.)
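The conversion step itself happens outside the app, in Python. A rough sketch, assuming Apple's coremltools package and a hypothetical trained Keras model saved as `flowers.h5`, might look like this:

```python
# Hypothetical sketch: converting a trained Keras model to Core ML
# using Apple's coremltools package (pip install coremltools).
import coremltools

# Load the Keras model and convert it to Core ML's .mlmodel format.
# The input/output names and class labels here are illustrative.
coreml_model = coremltools.converters.keras.convert(
    "flowers.h5",
    input_names="image",
    class_labels=["rose", "tulip", "daisy"],
)

# The resulting file can be dropped straight into an Xcode project.
coreml_model.save("FlowerClassifier.mlmodel")
```

The "loophole" for unsupported frameworks like TensorFlow is that the `.mlmodel` format is documented, so anyone can write a converter that emits it, which is why third-party converters are likely to appear quickly.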
Some of the pre-trained machine learning models that Apple offers are open-sourced Google code, primarily for image recognition.
Apple made it clear in the keynote today that every action taken on the phone is logged and analyzed by a symphony of machine-learning algorithms in the operating system, whether that's predicting when you want to make a calendar appointment or call a friend, or making a better Live Photo.
The switch to machine learning can be heard in the voice of Siri. Rather than relying on the standard, pre-recorded answers Apple has always used, Siri's voice is now entirely generated by AI. That allows for more flexibility (four different kinds of inflection were demonstrated on stage), and, as the technology advances, it should come to sound nearly indistinguishable from a human. (Apple's competitors are not far off.)
Apple also rattled off a number of other little tweaks powered by ML, like the iPad distinguishing your palm from the tip of an Apple Pencil, or dynamically extending the battery life of the device by understanding which apps need to consume power.
Okay, so Apple's really only published one paper. But it was a good one! And Ruslan Salakhutdinov, Apple's new director of AI research, has been on the speaking circuit. He recently spoke at Nvidia's GPU Technology Conference (although Apple's latest computers use AMD chips), and will be speaking later this month in New York City, among other appearances.
Apple also held a closed-door meeting with its competitors at a major AI conference late last year, shortly after Salakhutdinov was hired, to explain what it was working on in its labs. Quartz obtained some of those slides and published them here.
Is Apple a leader in AI research? Not according to most metrics. But many consider open research to be a way of recruiting top talent in AI, so we might see more papers and talks in the future.