Welcome to the future of mix-and-match user interfaces

We have gone from primarily using a keyboard and mouse to interface with our computers to adding voice, touch and a bit of gesture recognition to the mix. And while there will always be a special place in my heart for punch cards, switches and other more primitive interactions, we’re about to blow those founding four interfaces wide open (GigaOM Pro subscription req’d) thanks to a variety of research aimed at widening the communication pipeline between people and computers.

From the Leap Motion to the Myo armband, which measures muscle movements, we’re getting access to a varied world of new ways to interact with computers. Perhaps you’ve seen the Skinput project that turns the human arm into a keyboard? Or played with a projected interface that lets you put a touchpad on any surface? What about the Reemo I saw demoed at Solid last week, which turns an arm motion into a means of controlling your VCR?

Here in my hometown of Austin, Plum Lighting is working on a light switch that lets you create gesture-based inputs for any lighting setting you want. Want a section of Hue lights to turn pink? Trace the letter P on the pad and it happens (a rough sketch of what that kind of gesture-to-scene mapping could look like follows after the quote below). The whole field of UI is exploding, which is why this IEEE Spectrum profile of the WorldKit effort at Carnegie Mellon caught my eye, especially this section on how we’re going to be surrounded by interfaces wherever we are:

In fact, future consumers may choose from many different kinds of interfaces, mixing and matching to satisfy their style and fancy. Other researchers are now investigating smart glasses that track eye movements, wristbands that capture hand gestures, tongue implants that recognize whispers, even neck tattoos that pick up subvocalizations—the subtle vibrations your throat muscles make when you imagine how a word sounds.
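To make the Plum example a bit more concrete, here’s a minimal sketch of a gesture-to-scene mapping. The onGesture hook and the gesture names are my own assumptions (Plum hasn’t published a developer API that I know of); the lighting call itself follows the public Philips Hue bridge REST API, with placeholder values for the bridge address, API username and light group ID:

```typescript
// Hypothetical sketch: map a traced gesture symbol to a lighting scene.
// The onGesture hook and gesture names are assumptions; the HTTP call
// follows the Philips Hue bridge API (PUT /api/<user>/groups/<id>/action).

const BRIDGE = "http://192.168.1.10"; // placeholder bridge address
const USER = "hue-api-username";      // placeholder API username

// Set a group of lights to a color; hue runs 0-65535 on the Hue scale.
async function setGroupColor(groupId: number, hue: number): Promise<void> {
  await fetch(`${BRIDGE}/api/${USER}/groups/${groupId}/action`, {
    method: "PUT",
    body: JSON.stringify({ on: true, hue, sat: 254 }),
  });
}

// One handler per traced letter: "P" turns group 1 pink (hue ~56100).
const gestures: Record<string, () => Promise<void>> = {
  P: () => setGroupColor(1, 56100),
  B: () => setGroupColor(1, 46920), // blue, just to show a second mapping
};

// Imagined callback fired by the gesture pad when it recognizes a symbol.
function onGesture(symbol: string): void {
  const handler = gestures[symbol];
  if (handler) handler().catch(console.error);
}
```

The interesting part isn’t the HTTP call; it’s that the pad owns recognition and the mapping table owns meaning, so either can change without touching the other.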

As a consumer I’m still wrapping my brain around the best options for different parts of my smart home, but if I were a developer I’d be both excited and overwhelmed by the opportunities these UIs present. Libraries that make it easy to translate a single computer-recognized meaning across multiple UIs might become very popular in the future. Or maybe as users we just program our personal faves for each task. What do you guys think?
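If a library like that existed, I’d guess the core abstraction would look something like an intent bus: every input modality translates its raw events into a shared vocabulary of intents, and application code subscribes to intents rather than devices. This is purely a sketch of the idea, with invented names and types, not any real library’s API:

```typescript
// Sketch of a multi-modal intent layer: each input modality (voice,
// gesture, touch...) normalizes its raw events into one shared Intent
// type, and app code listens for intents without caring about devices.

type Intent = "lights.on" | "lights.off" | "volume.up" | "volume.down";
type Listener = (intent: Intent, source: string) => void;

class IntentBus {
  private listeners: Listener[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Device adapters call this once they've recognized a meaning.
  publish(intent: Intent, source: string): void {
    for (const l of this.listeners) l(intent, source);
  }
}

const bus = new IntentBus();

// A voice adapter maps phrases to intents...
function onVoiceCommand(phrase: string): void {
  if (/turn (the )?lights on/i.test(phrase)) bus.publish("lights.on", "voice");
}

// ...and a gesture adapter maps arm motions to the very same intents.
function onArmGesture(gesture: "swipe-up" | "swipe-down"): void {
  bus.publish(gesture === "swipe-up" ? "volume.up" : "volume.down", "gesture");
}

// Application code is modality-agnostic: one handler covers every UI.
bus.subscribe((intent, source) => {
  console.log(`Got ${intent} from ${source}`);
});

onVoiceCommand("please turn the lights on"); // -> Got lights.on from voice
```

The win is the same as in the Hue sketch above: recognition lives in the adapters, meaning lives in the shared intent vocabulary, and swapping in your personal fave input for a task means writing one small adapter.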
