Google Glass will be the next Apple Newton

What can Google Glass do better than my iPhone?

This originally appeared on LinkedIn, where you can follow Andrew Chen.

Turns out nerds are crazy about Google Glass

Recently, I tweeted the following:

I’m a google glass skeptic. Who’s with me?

Turns out most people think Google Glass is going to be awesome. Frankly, I was surprised—I figured it would be more balanced. But it turns out that people are more excited about the idea of Glass than any particular use case. And I’m excited about the product category too, but think version 1 (v1) might suck.

Google Glass is the new Apple Newton

One day wearable computing glasses may turn out to be awesome, but I'm convinced that Google Glass will be like the Apple Newton: a visionary product well ahead of its time. Maybe 10 years after its release, someone will figure out how to make the category mainstream with a different design. Regardless of whether v1 is good, all the investment in wearable computing is certainly exciting. What nerd doesn't want to fulfill his or her dream of being a cyborg? And within a few iterations, the industry may come out with a v5 that is awesome, just the same way the iPhone and iPad eventually fulfilled the dream of the Newton. Let's hope that happens, but in the meantime, I'm not optimistic about the v1 of Google Glass.

Is it better than smartphones?

My skepticism is rooted in one idea: At $1,500 (or $1,000 or even $500), Google Glass will have to do certain tasks significantly better than the smartphone to justify the price. And in the next two years, it may have to compete with many other devices, like smartwatches, that fulfill some of the same tasks. I'm skeptical that there are enough tasks where it'll be worth it, and I'm skeptical that voice as the primary input will be good enough to drive the whole interface.

Beyond the idea that it’s cool, you have to ask:

In what tasks does Google Glass actually perform better than a smartphone?

And I don’t think there are enough use cases to make this work.

Looking at the use cases

One data point on this is to watch the recent Glass marketing video to find out all the use cases it demonstrates. But let’s try to ignore all the awesome acrobatics and beautiful scenery, and just focus on what people are actually doing with the user interface:

List of use cases

Here’s my list of what people are doing on Google Glass:

  1. show the time
  2. record video
  3. send message via voice
  4. start video conference
  5. search Google images
  6. get the weather
  7. take a picture
  8. get directions on a map
  9. get flight details
  10. translate “delicious” to Thai
  11. look up something on Wikipedia
  12. share a photo

(Of course, it should be noted that part of why Google is creating this new developer preview is so that more apps can get written—but in that case, it's fancy technology looking for a use case.)

Glass versus phone (or other cheaper wearable devices)

The biggest issue is that the above use cases just aren't significantly better on a computer attached to your face than on the computer you carry in your pocket. Most of these are basically simple things you can already do on your phone—checking the weather, the time, etc. There's a small collection of things I'm convinced will be a lot worse, like searching for stuff or sending texts, because voice input is still weak. And then there's a small set of things, like taking POV photos or looking up maps, where Glass can really offer a better experience. Are those enough?

Voice sucks as the primary input

In particular, I’m skeptical of voice as the primary input. I think it’ll doom the product in the same way that horrible handwriting recognition doomed the Apple Newton. The state of the art on voice input, frankly, really sucks on both Android and iOS. Have you tried to compose a message that wasn’t “ok” or “coming home” via voice? Especially in a noisy cafe or on the bus? Plus, people are going to seem like crazy folks, talking to themselves over and over again, trying to coax their devices to do what they want.

(You can easily do an experiment on this by trying to do everything on your phone without touch for a while—you won’t last long, it’s super frustrating.)

It may be that Google has some new magic voice capabilities to release as part of Glass, but wouldn't the company bring them to the 500 million Android devices first? And if those magical voice capabilities make smartphones better, won't that erode Glass's differentiation from the phone?

I hope it works

Ultimately, I hope it all works. I haven't used Google Glass yet, and I'll be really excited to try it out. But rather than being wowed by the mere idea of wearable glasses, I think it's important to start talking about the actual use cases. How will people interact with this thing in a way that makes it an amazing experience, especially in the context of all the other wearable computing devices we're sure to carry with us—phone, watch, Fitbit, Nike bands, etc.? Those are the kinds of questions we'll need to answer to really push the next generation of devices forward, rather than just make really awesome gadget porn.