Google has replaced emasculating smartphone fondling with compulsive eyeglass fiddling. At the SXSW technology conference in Austin, Google put on the most “real” demonstration of Google Glass yet, which involved a taxing array of swipes directed at the user’s temple, since the primary way to interact with Glass is through a tiny touchpad attached to one arm of the glasses-mounted headset.
Here’s a Googler showing off what Glass can presently do:
What’s most interesting about this demo is the use of what Google calls “cards,” which people will be able to swipe forward and backward through to reach different items. Swiping deep into your “history” on Glass gives access to search functions like Google Now, as the instantaneous search feature is called. On certain cards called “bundles,” tapping the touchpad takes the user into something like an app (say, for the weather).
Glass accepts “text input” through voice recognition. Saying “OK, glass” wakes the headset up and lets it know that whatever you say next is a command. This is a good way to Google things, but taking a picture this way, by saying “OK, glass, take a picture,” seems cumbersome.
Google also announced a handful of new apps for Glass: one for the New York Times, which will display headlines and read stories aloud; Path, which is sort of like Facebook for people who just can’t get enough sharing; and Evernote, a popular note-taking app.
While it’s too early to say how Glass will perform in the wild, it sure seems as if Google has simply replaced one pretty-good interface—the touchscreen—with what could be a frustratingly limited alternative that has the additional drawback of living on people’s faces, creeping out everyone around them. Like countless technologies before it, Glass will succeed or fail on its ability to uniquely enable new ways to get things done.