Gesture control: Why 2013 is the year when we stop clicking and start waving

Behold, the mighty hands.
Image: John Shearer/Invision/AP Images

The first shot in the gesture revolution was Nintendo’s motion-sensing game controller for the Wii, followed quickly by infrared-camera motion capture on Microsoft’s Kinect for the Xbox. Last year, nerds praised the not-yet-launched Leap Motion, which allows for gesture-based interaction with devices such as laptops. Now, a Y Combinator-backed company called Thalmic Labs has announced MYO, a cuff-like input device that uses electromyography—detection of the electrical signals produced by muscles—to recognize gestures and control a range of devices. Taken together, Leap, MYO and a raft of new input and interaction technology will make 2013 the year when we stop clicking and start waving.

MYO’s mixture of gesture recognition and muscle twitch detection gives it the ability to sense individual finger movements, as well as arm and hand gestures. This raises the stakes in the gestural interface game, and, with its reliance on Bluetooth for a short-range wireless connection, allows the wearer to step away from the desktop and control devices—not just presentations, but mobile-controlled objects, like the quadcopters shown in the video above. Considering the growing number of items managed from an iPhone, iPad or other remote device, MYO could bring more fine-grained gestural interaction to management of the home, and, well, basically anything a developer can connect it to. At launch, MYO will support interaction with Windows and Mac OS, with iOS and Android support to come, according to the company.

MYO, shipping in late 2013 for $149 (Leap just announced May availability for $79.99), is another step toward the Spielberg-incepted ideal of hand-waving orchestration of the myriad smart devices in our lives, set in motion by “Minority Report.” The film, with its iconic scenes of Tom Cruise sweeping through recorded premonitions of a crime, has fixed in our heads what “natural” interaction with hands and fingers should look like, just as Star Trek incepted the outlines of future personal devices in our heads. Some already argue this isn’t natural interaction at all, but a strange language of flicks and waves that has taken on a life of its own.

Kickstarter has become popular marketing ground for new input technology, from MAUZ, a new entry that turns a mobile device into a wireless gestural controller, to MaKey MaKey, a lo-fi assemblage of alligator clips and an open-source Arduino-compatible board that turns everything from a banana to a sheet of paper into a controller. The buzz created by Leap and other early entrants into the gestural interaction race will undoubtedly spur more entrepreneurs to find clever, simple ways to enable us to poke, prod, and play with our increasingly interactive physical environment.

This also means we will be surrounded by an increasingly strange new interaction lexicon, moving from the intimate one-fingered drag and two-fingered pinch to sweeps with the hand to mime-like pops of the wrist and waggles of the palm. With its focus on muscle movement, MYO broadens this into a range of full-arm gestures, somewhere between playing a Theremin and performing Greek drama in the round.

As we zoom further outward, not only conducting our personal Internet of Things but playing sorcerer’s apprentice, casting our microdrones on their way, this language moves from small ritual to a kind of mad semaphore. The next few years will bring us a veritable Babel of flicks, twists, sweeps (explored in this video from Art Center College of Design) and, as suggested by early trials of Google’s Glass, head nods and eyerolls, making the morning subway journey less like a packed can of sardines, and more like performance art.