While Google Glass may have captured the public’s interest in headwear, it can still only run its own applications and requires eyeframes specced to fit the Glass. Not to mention that Glass itself is driven primarily by voice and hand gestures; there is an experimental “glance” command, plus a rudimentary awareness of head angle that wakes Glass from its sleep state.

Uday Parshionikar demonstrates his hands-free device playing Angry Birds

Uday Parshionikar of Perceptive Devices has a different idea about wearing a device on your head. In fact, he has developed technology that could be attached to a Google Glass to let it control other computers, gaming devices… anything that uses mouse clicks or gestures. But here’s what’s different: it’s all hands-free, relying on sensors that monitor the movement of your eyes, your head, and facial gestures (particularly smiling) to control the device.

Here’s a video demonstration, recorded at #CEWEEK 2014 in New York City, where he showed a prototype of the technology:

[youtube url="http://youtu.be/SgWpyZiwAfE" fs="1" hd="1" rel="0" showsearch="0"]

The prototype performed well in the demonstration, managing basic web navigation, straightforward editing in Word, and a round of Angry Birds. I look forward to seeing the more complete version, which I would happily clip onto my Google Glass for completely hands-free control. I would still use a pressure-sensitive tablet for drawing, though; some skills can’t be replaced that easily.