The next big step for AI is getting out of the cloud and onto your phone

The neural network goes right…there.
Image: Reuters/Dado Ruvic

You want AI on your phone. It's faster and more secure, it works regardless of the availability of cell service or wi-fi, and perhaps just as importantly, you can look down at that five-inch device wrought from sand and silicon with the full knowledge that dozens, if not hundreds, of virtual machine brains are making decisions inside just for you.

Luckily, the tech companies that develop the services you use every day, like Google and Facebook, also want this to happen, and have been developing AI that takes up less space and runs faster, which is optimal for mobile devices. That means features like predictive text or photo manipulation can happen faster by using your phone's resources instead of their server space. Of course, smartphone makers also have an interest in this, as AI technology like virtual personal assistants has become a feature that distinguishes one device from another.

Last week, Facebook open-sourced a new version of its AI that understands language, FastText. To get it to work, Facebook had to break down language into smaller bits that machines could understand. The way we communicate, through words arranged into sentences, is inefficient and difficult for machines. Deep neural networks excel at representing complex ideas by breaking them into pieces that relate to other ideas.
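One way FastText breaks words into smaller, machine-friendly bits is character n-grams: short overlapping slices of a word. A minimal sketch of that slicing (the bracket-padding convention and the 3-to-5 range mirror FastText's defaults; the function itself is illustrative, not Facebook's code):

```python
def char_ngrams(word, n_min=3, n_max=5):
    """Break a word into overlapping character n-grams.

    Angle brackets mark the word's boundaries, so a prefix
    like "<co" is a different piece than a mid-word "co".
    """
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams

print(char_ngrams("cow"))  # ['<co', 'cow', 'ow>', '<cow', 'cow>', '<cow>']
```

Because rare or misspelled words still share many of these slices with words the model knows, the network can handle them without storing a separate entry for every word.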

These pieces are called vectors: long strings of numbers that enumerate how ideas exist in relation to one another. The words "cow" and "horse" are similar, because they're often seen in the context of farms. But "cow" is more closely related to "milk," and "horse" to the Kentucky Derby, so the two are placed a little farther apart.
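The "closeness" between vectors is typically measured with cosine similarity. A toy sketch of the farm example (the three-number vectors here are hand-picked for illustration; real embeddings are learned from text and have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d vectors; dimensions loosely read as
# (farm-ness, milk-ness, racing-ness).
vectors = {
    "cow":   [0.9, 0.8, 0.1],
    "horse": [0.9, 0.2, 0.8],
    "milk":  [0.3, 0.9, 0.0],
}

print(cosine(vectors["cow"], vectors["horse"]))
print(cosine(vectors["cow"], vectors["milk"]))
```

Run it and "cow" scores closer to "milk" than to "horse", matching the intuition above: both are farm animals, but cows and milk co-occur far more often.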

Facebook took those relationships and simplified them. Instead of 10,000 connections between farm animals, Facebook needed just 50 meta-connections. While this seems drastic, the researchers told Quartz they were surprised at how well the network adapted to simplification. It allowed FastText to be scaled down from gigabytes to kilobytes, and suddenly it became entirely reasonable to run on a phone. That allowed things like searching through Facebook posts, suggesting hashtags, or even automatically moderating content to take place on the device, rather than in the cloud, making all those functions faster, more secure, and available without an internet connection.
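Part of how a model shrinks from gigabytes to kilobytes is storing each number more cheaply. A crude sketch of that idea: mapping 32-bit floats onto 8-bit integers plus one shared scale factor (FastText's released quantization is more sophisticated, but the size-versus-precision trade-off is the same):

```python
import array

def quantize(weights, bits=8):
    """Linearly map float weights onto signed integers.

    Each weight becomes one small integer; a single float
    'scale' is kept to convert back. 4 bytes -> 1 byte per weight.
    """
    levels = 2 ** (bits - 1) - 1          # 127 for 8 bits
    scale = max(abs(w) for w in weights) / levels
    codes = array.array('b', (round(w / scale) for w in weights))
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate floats from the integer codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 0.99, -0.07]
codes, scale = quantize(weights)
approx = dequantize(codes, scale)
# Each recovered value is within half a quantization step
# of the original, at a quarter of the storage.
```

The recovered weights are slightly off, which is exactly the "simplification" the researchers found the network tolerated surprisingly well.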

Facebook took a similar approach with style transfer, which uses AI to apply the aesthetics of one image to another. That was enabled by its deep learning software for mobile devices, called Caffe2Go, which was announced in November 2016.

Google has also been shipping smaller and smaller neural networks, including text features for its Smart Reply on smartwatches. By taking this technology and embedding it into messaging apps, the company says it can offer the features that AI allows without having to make personal messages visible to its servers.
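The on-device shape of a system like Smart Reply can be sketched in a few lines: a small fixed set of canned replies is scored against the incoming message entirely on the phone, so the message never reaches a server. This toy version uses hand-written keyword sets purely for illustration; Google's real system ranks replies with a learned neural model:

```python
# Hypothetical canned replies and the keywords that trigger them.
KEYWORDS = {
    "Sounds good!":        {"plan", "dinner", "ok", "good"},
    "On my way":           {"where", "coming", "late", "eta"},
    "Can we reschedule?":  {"busy", "cancel", "tomorrow", "move"},
}

def suggest(message, top_k=1):
    """Rank canned replies by keyword overlap with the message."""
    words = set(message.lower().replace("?", "").split())
    ranked = sorted(KEYWORDS,
                    key=lambda reply: len(KEYWORDS[reply] & words),
                    reverse=True)
    return ranked[:top_k]

print(suggest("where are you, what's your eta?"))  # ['On my way']
```

The privacy argument follows directly from the structure: everything the function needs (the reply set and the scoring logic) lives on the device.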

Andrew Ng, former chief scientist at Baidu, noted the trend yesterday in a tweet:

The IoT devices that Ng is referring to, like smart home security cameras and doorbells, benefit for the same security and functionality reasons. Making voice recognition take less compute power could mean Alexa answers a question faster, or a home security camera could detect intruders even when the internet is out. Of course, there's nothing to say that next week an even better, stronger AI won't be developed, one that's only possible to run on enterprise servers.

At least until that’s reduced as well.