Video Gesture and Pointing Recognition

Video gesture and pointing recognition devices use even more sophisticated methods to interact with the computer. The basic principle behind these techniques is that the devices are controlled by human gestures and motions. Recent developments show that there are two types of applications for video recognition devices, and the difference lies in the mobility
aspect. While mobility plays an important role in SixthSense and Skinput, g-speak is mainly designed for stationary use. Nevertheless, all three types have the potential to become future devices for human-computer interaction. Let us start with the possible applications of the g-speak tool.

With their tool, Oblong Industries has designed a completely new way to use free-hand gestures as input and output. Besides the gestural part, they also built the platform for real-space representation of all input objects and on-screen constructs across multiple screens, as shown earlier in a figure. The tool not only uses so-called mid-air detection, which recognizes human hand gestures and operates the interface, but also a multi-touch tabletop PC, as described in the previous section. Skinput, introduced in a previous section, shows a new way in which interaction with human fingers could look in the future. Skinput was designed around the fact that mobile devices often do not have very large input displays.

Therefore it uses the human body, or more precisely the human forearm, as
an input surface with several touch buttons. As can be seen, for now this system contains some interesting new technology but is limited to pushing buttons. Thus it will find its use in areas where a user operates an interface with buttons only, and it is not very likely to replace the mouse or keyboard in general. With the SixthSense prototype, Pranav Mistry demonstrates in a fascinating way how this technology could find its use in future human-computer interaction.
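The button-only interaction that Skinput enables can be sketched as a simple mapping from a classified tap location on the forearm to a command. This is a minimal illustrative sketch; the location names and bound actions below are hypothetical and not part of Skinput itself.

```python
# Hedged sketch: dispatching classified forearm tap locations to commands,
# in the spirit of Skinput's button-style interaction. Location names and
# actions are hypothetical illustrations, not Skinput's actual interface.

FOREARM_BUTTONS = {
    "wrist": "play_pause",
    "mid_forearm": "next_track",
    "near_elbow": "volume_up",
}

def handle_tap(location: str) -> str:
    """Return the command bound to a tap location, or 'ignored'."""
    return FOREARM_BUTTONS.get(location, "ignored")

print(handle_tap("wrist"))     # play_pause
print(handle_tap("forehead"))  # ignored
```

The dispatch table makes the limitation discussed above concrete: every interaction must be expressible as one of a fixed set of button presses.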

[Figures: Map navigation on any surface; detailed flight information; keyboard on the palm]

As already mentioned, this tool uses human gestures as an input method, but it offers many more possible applications. To name only a few examples: it can project a map through which the user can zoom and navigate, and it can serve as a photo
organizer or as a free painting application when projected on a wall. Taking pictures
by forming the hands into a frame, displaying a watch on the wrist or a keyboard on the palm, and showing detailed information in newspapers or current flight information on flight tickets are just a few of the many possibilities. Some of these
applications are shown in the figures. This long list of applications highlights the high potential of this new technology, and with up-to-date components the prototype costs only about $300, making it affordable for the global market. Thus this device most likely has the highest probability of being the first to push into the market and mark the beginning of future interaction with the computer.
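One of the interactions above, zooming a projected map with two fingertips, can be sketched as scaling the map by the ratio between the current and the initial fingertip distance. The function below is an illustrative assumption about how such a pinch gesture could be interpreted, not SixthSense's actual implementation.

```python
import math

def zoom_factor(p1, p2, q1, q2):
    """Hypothetical pinch-zoom sketch: scale factor is the ratio of the
    current fingertip distance (q1-q2) to the initial one (p1-p2)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(q1, q2) / dist(p1, p2)

# Fingertips move apart from 100 px to 200 px -> the map zooms in 2x.
print(zoom_factor((0, 0), (100, 0), (0, 0), (200, 0)))  # 2.0
```

Moving the fingertips closer together yields a factor below 1, i.e. zooming out, so a single ratio covers both directions of the gesture.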
