In its current incarnation, Google Glass is very much a beta, possibly even an alpha—in other words, a prototype that Google happens to be selling for $1,500 to a limited group of trial users. But as technology does its thing—smaller, faster, cheaper—and Google (or someone else) figures out better ways to connect the computationally puny innards of face-based computers to smartphones and the cloud, these devices are going to be able to do pretty much anything a PC can do, only from the bridge of your nose. Here’s our near-future wish list of apps that would make Google Glass truly useful.
Machine translation has decades to go before it’s as good as the human kind, but for basic interactions—shopping, asking for directions, terrible pick-up lines—something like Google Translate is more than adequate. Imagine having all the conversations around you subtitled in more or less real time.
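The pipeline such an app implies is speech recognition feeding machine translation feeding the heads-up display. A minimal sketch of that last hop, with a hard-coded phrasebook standing in for a real service like Google Translate (the `PHRASEBOOK` dict and `subtitle` function are illustrative assumptions, not any real API):

```python
# Toy stand-in for a translation service: in a real app, `heard_phrase`
# would come from a speech recognizer and the lookup would be a network
# call to a machine-translation API.
PHRASEBOOK = {
    "¿cuánto cuesta?": "how much does it cost?",
    "¿dónde está el baño?": "where is the bathroom?",
}

def subtitle(heard_phrase):
    """Return the English subtitle for a recognized phrase."""
    return PHRASEBOOK.get(heard_phrase.lower(), "[untranslated]")
```

Even this crude version captures the interaction model: audio goes in continuously, and short lines of English come out on the screen.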
There are lots of smartphone apps that allow you to scan a product’s bar code and see if you can find it more cheaply elsewhere. They include Amazon’s Price Check and Google Shopper. But all of these apps require you to pull out your phone. What if you could scan a bar code just by looking at it, and instantly receive an estimate of what you would pay to have the same product shipped to your house, and when it would arrive?
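Once the bar code resolves to a set of offers, the comparison itself is simple arithmetic: add shipping to each online price and see whether any delivered total undercuts the shelf price. A sketch under that assumption (the `best_offer` function and its tuple formats are invented for illustration, not any shopping app's actual API):

```python
def best_offer(in_store_price, online_offers):
    """online_offers: list of (seller, item_price, shipping) tuples.

    Returns (seller, delivered_total, savings) for the cheapest online
    offer that beats the shelf price, or None if the store wins.
    """
    delivered = [(seller, price + shipping)
                 for seller, price, shipping in online_offers]
    seller, total = min(delivered, key=lambda pair: pair[1])
    if total < in_store_price:
        return seller, total, round(in_store_price - total, 2)
    return None  # the shelf price is already the best deal
```

The hard part, of course, is the lookup service behind it, which is exactly what Amazon Price Check and Google Shopper already provide on phones.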
Musicians should be excited about the potential of Glass. Chief among its applications for them would be displaying sheet music on the Glass screen so a user could play from it. Showing two measures at a time would allow for easy visual scanning. The app could recognize the notes being played (using audio-recognition technology of the kind employed by Shazam and others), automatically advance as it hears them, and provide feedback on notes the musician misplayed. Other possible functions include a metronome and a tuner for string instruments.
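The auto-advance behavior described above is essentially score following: match each detected pitch against the next expected note, and move to the next measure when the current one is exhausted. A rough sketch, assuming the score arrives as lists of pitch names and that pitch detection happens elsewhere (the `ScoreFollower` class is hypothetical, not a real Glass API):

```python
class ScoreFollower:
    """Tracks position in a score as detected notes stream in."""

    def __init__(self, measures):
        self.measures = measures   # e.g. [["C4", "E4"], ["G4", "C5"]]
        self.measure = 0           # index of the measure on screen
        self.note = 0              # next expected note in that measure
        self.missed = []           # (measure, expected, heard) tuples

    def on_note_detected(self, pitch):
        """Consume one detected pitch; return the current measure index."""
        expected = self.measures[self.measure][self.note]
        if pitch != expected:
            self.missed.append((self.measure, expected, pitch))
        self.note += 1
        if self.note == len(self.measures[self.measure]):
            self.note = 0
            self.measure += 1      # the display would advance here
        return self.measure
```

The `missed` log is what would drive the feedback feature: after a practice session, the app could show exactly where the player strayed from the page.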
There are pen-based computers, but one barrier to their use is that they lack an interface other than, well, paper. Meanwhile, one problem with Google Glass is that its main interfaces are a touchpad on your temple and somewhat unreliable voice input. But what if you could use the built-in camera to control Glass in a more sophisticated (and discreet) way than talking at it?
Imagine you’re sitting in a meeting and someone mentions a bit of market-moving news they read that morning, and you’re curious to know more. On any piece of paper, you scratch out “search: [keywords go here]” and Glass instantly pops up relevant headlines. The same trick would work with Wikipedia, proper nouns, and anything else you’d like to look up. Of course, Google Glass could also photograph and store your notes, and transform them in useful ways, turning them into to-do items in a task manager or dates in a calendar.
Moving beyond notes, the versatility of pen and paper opens up a seemingly limitless number of ways to interact with Glass. Writing out “Message Jim [text of message]” could turn your handwritten note into an IM to someone in your address book.
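Both pen commands boil down to recognizing a prefix in the OCR'd text and routing the rest accordingly. A sketch of that dispatch step (the `parse_command` function and its tuple return formats are assumptions for illustration; the genuinely hard part, handwriting recognition, is omitted):

```python
def parse_command(ocr_text):
    """Map OCR'd handwriting to a (command, ...) tuple.

    Recognizes the "search:" and "Message <name> <body>" forms from
    the examples above; anything else is stored as a plain note.
    """
    text = ocr_text.strip()
    lower = text.lower()
    if lower.startswith("search:"):
        return ("search", text[len("search:"):].strip())
    if lower.startswith("message "):
        rest = text[len("message "):]
        name, _, body = rest.partition(" ")
        return ("message", name, body.strip())
    return ("note", text)
```

The falling-through-to-a-note default matters: misread commands should degrade into stored scribbles, not misfired messages.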
At present, Glass is mostly about putting information just outside your central field of view, but as it becomes integrated into eyeglasses, it could project useful interfaces, like keypads, onto any surface you encounter. Walls, tables, the palm of your hand—all could be touch interfaces.