Google attracted lots of attention by releasing a concept video showing “Google glasses” in action, and co-founder Sergey Brin even wore a prototype in public. But whether or not Google’s Project Glass proves to be a breakthrough in providing a heads-up display, it marks the beginning of the end of the “wearable computer” idea.
Wearable computers have been around for a long time, thanks mainly to a Canadian called Steve Mann. He started pioneering the idea in the late 1970s, and he was a founding member of the wearable computing group at the MIT Media Lab, where he did his PhD.
In the early days, before laptop computers were invented, wearable computer buffs had to cope with equipment that was both heavy and bulky. A lot of ingenuity went into packing things into custom-made “vests” or otherwise stowing them around your body. Eye-level screens were only a few centimetres across.
Since then, of course, all the components required for wearable computing — computers, display screens, video cameras, GPS units, motion trackers and so on — have shrunk dramatically while becoming much more powerful. In fact, a lot of us carry all of them all the time without even noticing, because they’ve been combined into a single small, portable device: the smartphone.
A smartphone isn’t quite the same thing, because wearable computing was based on the idea of “computer-mediated reality”. The computer can provide information overlays to identify buildings and people, show directions and so on, as shown in Google’s video. This is “augmented reality”. Arnold Schwarzenegger’s Terminator found it useful.
It can also involve replacing some or all of your view of the real world with a computer-generated “virtual reality”. This could range from replacing advertisements with pictures of cats to providing a whole fantasy world like something out of the Avatar movie.
The real question is whether we would want to use our “augmented reality” occasionally, on demand, or whether we’d need a continuous display. If on-demand is enough, we can simply hold up a smartphone and use its built-in camera, GPS, accelerometer, internet connection and so on. Information can easily be overlaid on the smartphone screen image, whether it’s walking directions or a guide to the night sky.
Dozens of apps already do this.
Having a continuous read-out requires some form of head-mounted camera/display system, probably using Bluetooth to relay information from the smartphone. That’s where Steve Mann’s EyeTap came in, back in 1981, and it’s what Google’s glasses promise for the future. The closest thing you can buy today is Brother’s AirScouter headset, which is aimed at industrial applications. This doesn’t have a screen: it projects the image directly onto your retina. I don’t expect you’d want that all the time.
The problem with headsets is that only a few geeks think they look cool. Most people don’t even like wearing ordinary glasses, which is why contact lenses have become a big business. The 3D movie industry is also stumbling over the problem of getting people to wear special glasses for a couple of hours, in the dark. Unless headset suppliers can make them look exactly like Ray-Ban Wayfarers or Oakleys, they’re doomed.
In any case, headsets should only be a temporary stage in the development of augmented reality. We’ve seen wearable hardware get smaller over the past three decades, and that will continue for the next three decades. Eventually, your smartphone will be a couple of millimetres square, so it could either be installed in a contact lens or implanted under your skin. If you fancy having a “third eye” in the middle of your forehead, that could be your cameraphone.
Jack Schofield