
Gesture Sensor for Mobile Devices

Authored on: Sep 2, 2013 by KyongSae Oh et al.

Technical Paper

A gesture sensor detects and recognizes meaningful gestures made with parts of the human body such as the hands, arms, face, and head. Most gesture sensors are based on image sensors, and image-sensor technologies have focused on capturing depth information. Since the advent of the smartphone, however, other technologies such as touch ICs and proximity sensors have entered the gesture-sensing market with low power consumption. Mobile devices demand both high accuracy and low power consumption. This white paper surveys gesture sensors and their trends for mobile devices, and also discusses existing challenges and future research directions.


1 comment

Terry.Bollinger Posted Sep 13, 2013

This is an incredibly important topic, and the article provides a pretty good overview of the range of possible solutions. Unfortunately, it is also a bit hard to read, coming across in places more like a list of bullet points converted into paragraphs than like an actual article. A few sections also look as though they were machine translated without subsequent review by a knowledgeable English-speaking editor, e.g., "Image-based 3D technologies are compatible to conventional system. Therefore, it makes vibrant researches in academia and the industrial world." Eh? Coverage of competing, non-Samsung technologies was at times weak: the very relevant Leap Motion device appears in a figure (Figure 5), yet it is never mentioned anywhere in the text!

Considering that a main theme of the article is that something called the Dynamic Vision Sensor (DVS) will be the wave of the future for mobile devices, coverage of that idea comes very late, in the last two pages. The idea is remarkably simple: at the pixel level, a DVS imaging chip only "sees" transitions in light levels, with unchanged areas producing no signal. That is exactly what human vision does, incidentally. As with human vision, this enormously decreases the amount of data that must be processed to derive useful information, so you end up with faster and more energy-efficient detection of all forms of motion. (Dare I use the phrase "bio-inspired" here? It fits!) Apparently the dynamic range of the chip is also quite good, so in the end I think the Samsung authors make their DVS case pretty well in spite of some distracting issues with the way the article was written. DVS really is a technology that needs to be watched, and watched carefully, by anyone making smartphones or pretty much any other kind of small, smart sensor device.

Finally, though it was mostly an aside in the article, I was impressed by the early figure showing the range and diversity of sensors hidden away in the Samsung Galaxy S4. Wow! Clearly, S4 apps have not remotely caught up with the full potential of having so many local sensors in so many smartphones. Here's just one example (note that I have not checked whether anyone has already done it): if users over a large area chose to share weather-related sensing via some hypothetical "We Share Weather" app, my guess is that a distributed forecasting capability attached to the same phones could develop and redistribute, in real time, hugely more specific and detailed local weather predictions than will ever be possible with traditional forecasting. For someone in Oklahoma or Texas, that could literally be a life saver. (Not my bailiwick: feel free!)
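To make the DVS idea concrete, here is a minimal sketch of the event-generation principle described above: compare each pixel's log intensity against the level recorded at that pixel's last event, and emit an event only when the change crosses a threshold. The function name, threshold value, and frame-based simulation are illustrative assumptions only; a real DVS pixel performs this comparison asynchronously in analog circuitry rather than on stored frames.

    import numpy as np

    def dvs_events(prev_log, frame, threshold=0.15):
        # Hypothetical frame-based simulation of DVS event generation.
        # prev_log holds each pixel's log intensity at its last event;
        # frame is the new intensity image (same shape, values >= 0).
        log_frame = np.log(frame + 1e-6)              # guard against log(0)
        diff = log_frame - prev_log
        fired = np.abs(diff) >= threshold             # unchanged pixels stay silent
        rows, cols = np.nonzero(fired)
        polarity = np.sign(diff[fired]).astype(int)   # +1 brighter, -1 darker
        new_log = prev_log.copy()
        new_log[fired] = log_frame[fired]             # reset reference only where events fired
        return list(zip(rows, cols, polarity)), new_log

    # Toy usage: random intensity frames; a truly static scene would
    # produce an empty event list on every call.
    frames = np.random.rand(10, 4, 4) + 0.5
    prev = np.log(frames[0] + 1e-6)
    for frame in frames[1:]:
        events, prev = dvs_events(prev, frame)

The point of the sketch is the data-rate savings the comment describes: only pixels whose log intensity has changed produce any output at all, so a mostly static scene generates almost no work for the downstream gesture recognizer.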

