During a non-stop, two-hour keynote address at its annual I/O developers conference, Google unveiled a barrage of new products and updates. Here’s a rundown of the most important things discussed:
Google CEO Sundar Pichai kicked off the keynote by unveiling a new computer-vision system coming soon to Google Assistant. As Pichai explained, you’ll be able to point your phone’s camera at something, and the phone will understand what it’s seeing. Pichai gave examples of the system recognizing a flower; identifying a row of restaurants on a street in New York and automatically pulling in their ratings and information from Google; and reading the network name and password off the back of a Wi-Fi router, then automatically connecting the phone to that network. Theoretically, in the future, you’ll search the world not through text or your voice, but by pointing your camera at things.
Google announced a second generation of its specialty AI hardware, called tensor processing units (TPUs). While the first generation of TPUs was reserved for internal company use, the technology will now be available on Google’s burgeoning cloud-service business. The move allows Google to offer a faster cloud service to companies, but what it really shows is how important the company thinks the cloud business is to its future. Google is betting that its AI tech can set its cloud offerings apart from competitors’, and the company wants to ensure it won’t need to rely on third-party hardware vendors to keep its AI advancing.
Through Google Lens, Google’s digital assistant can now understand the world as you see it. If you open Assistant and point the camera at the marquee of a music venue, it can recognize the band’s name, play their music, and even book you tickets to the show.
Assistant is available on the iPhone starting today (it had previously been available only on certain Android phones and the Google Home). Google said it’s also launching an Assistant developer kit, so third parties will be able to build devices that use Assistant for voice interaction. This coming holiday season, expect to see advertisements for all sorts of speakers, phones, and internet-of-things devices with Assistant built in.
Google is also rolling out Assistant in a range of new languages, including German and Korean, and will soon be giving developers the ability to build purchasing tools for the system. Google showed off an example: it placed an order for a Panera salad through Assistant nearly as easily as one might by speaking to a real person at the restaurant, even asking for (and getting confirmation of) substitutions.
At last year’s I/O, Google unveiled Home, a smart-home speaker and its answer to the Amazon Echo. At this year’s event, Home learned a few new tricks. Google announced that Home will soon be able to provide users with proactive notifications: if you have your work address saved in Google Maps, for example, it’ll notify you if you need to leave earlier than usual for your commute. The speaker will light up when it has something to tell you, and you just have to ask what it wants to say.
The Home will also soon be able to make calls to any phone in the US and Canada, a feature announced about a week after Amazon said its Echo speakers can call anyone a user knows who also has an Echo. Google takes Amazon’s effort further by allowing calls to any phone, and by using voice recognition to understand who’s talking to it. For example, if you ask a Home to call your mother, it will call her; if your spouse asks the same thing, Home will know to call your spouse’s mother instead.
You’ll also soon be able to view content you’ve asked a Home for on other devices. For example, if you ask Home for directions to your next meeting, it’ll automatically send directions from Google Maps to your phone. You’ll also be able to view information you ask Home for on any screen connected to a Chromecast streaming device.
Google said that over 500 million people now store photos through its Photos app. The company unveiled a few new updates to the app, including one that can find photos to send to friends. The app’s AI looks at photos you’ve taken but haven’t shared, and decides which ones the people in them might want to see; Google says the feature will help you be “not a terrible person.”
Google also unveiled “Photo Books”: users will be able to ask its AI to trawl through all the photos it has of a person and automatically generate a picture book from them. They can then pay $10 to have a physical version of the book mailed to them. The age-old thinking when it comes to gift-giving is that “it’s the thought that counts”; now you can let Google’s AI do the thinking for you.
Google will also bring its Lens technology to Photos: you’ll be able to tap on any photo you’ve taken, and the app will automatically try to find pertinent information in it, such as phone numbers to call or the history of buildings in the image.
Google announced the latest iteration of its Android mobile operating system, Android O—the company didn’t say what the “O” will stand for, but it’s looking likely it will be “Oreo,” after the chocolate-and-cream cookie.
The Android team blew through a range of new updates, including new emoji, picture-in-picture video, security enhancements, and more AI capabilities built into the device. The new operating system will also be able to auto-fill logins and passwords you’ve stored with Google on other devices. For example, if you’ve logged into Twitter in Google Chrome on a laptop, then when you install the Twitter app on an Android phone, the phone will suggest logging in with the Twitter account you’ve already used. Android O should also increase devices’ battery life, through efficiency enhancements such as limits on what apps can do in the background.
The company didn’t give a launch date for the new operating system, but Google has tended to have new product announcements in autumn, so it’s likely we’ll learn more then.
Google will release a lighter version of its mobile operating system, called Android Go, for devices with 1 GB of memory or less. The operating system is intended mainly for developing nations; Google said there are now more Android users in India than in the US. It will help users without robust, constant internet connections: for example, it will include YouTube Go, a version of the streaming app that lets users save videos to watch offline, conserve data usage, and share videos from one device to another without a data connection. The first devices running Android Go will ship in 2018, the company said.
Google did not spend nearly as long discussing virtual reality as it has at past I/O events, but it did say that a new type of device would be launching for its Daydream VR platform announced last year. It’s working with HTC (the smartphone company behind the Vive VR headset) and Lenovo to build standalone VR headsets that don’t rely on a smartphone for their power. HTC said in an email to Quartz that its new headset will launch later this year.
Google also further teased a technology it has been working on for years through Project Tango: a form of augmented reality that understands the world around you and provides information as you walk through an indoor space. Google calls it “visual positioning service,” or VPS, a play on GPS: the technology is meant to orient you indoors much as GPS does outdoors.
In an onstage demonstration, Google showed a future where you could hold up your smartphone as you walk through a hardware store, and be directed by Google to the product you’re after, without ever having to talk to or find an attendant. The company also showed off new augmented-reality features intended to help students learn, similar to the educational software for virtual reality school trips it unveiled in 2015.