WereKitten said:
I sort of assumed that there was an internal separate "hand tracking" mode that used a detailed hand model instead of a full body skeleton, for the times when you need to use a cursor-based interface in menus, media galleries etc. Partly because I always assumed that MS wanted to push the very same tech for PCs in education, kiosks and media centers, and the very least you would need for universal operation is a precise multitouch pointing feature.

The way it is formulated I read it as "a cube of approx. 4 cm per side", but if it's a reference to the volume then it's not immensely better: the cube root of 4 cm^3 is approx. 1.6 cm.

If it were to read the horizontal/vertical position of your hand to move a pointer on-screen, that would translate, if you move your hand in an 80 cm x 50 cm (x*y) range in the air, to an input resolution of about 50*30 "points". That would mean a step of almost an inch on your typical screen... only good for the roughest interface, and probably maddening if you ever have to select, say, music or videos from a list. If the x/y resolution is 4 cm it's even worse: cut that by more than half, with on-screen steps of an inch and a half. Using the angle of your arm (say shoulder-to-wrist) doesn't solve much: unless we're understanding the whole 4 cm statement wrong, the angle computed from two points will have an even worse resolution (about half as much, actually).

Averaging helps, of course: at 30 samples per second you can average over, say, the last 10 samples and still have a somewhat responsive cursor. That brings a factor of sqrt(10) to your resolution, boosting it up to about 150*100, so that tracking the position of your hand in a vertical plane in the air would have an effective resolution of about 1/5" on-screen. Unless it is 4 cm per side, in which case we're back to about 70*45 and half an inch.

Either way, it might be acceptable if the interfaces are designed for rough input: big buttons, highly zoomed-in lists etc. (think a smartphone interface). It is totally unacceptable as an input method for a shooter, though.

Oh well, maybe it's all wasted back-of-envelope math anyway. Let's wait to see more precise specs :)
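For what it's worth, the back-of-envelope numbers seem to check out. Here's a quick Python sketch reproducing them under the same assumptions (80 cm x 50 cm hand range, a 1.6 cm or 4 cm sensor step, 30 Hz sampling with a 10-sample average; the sqrt(N) gain assumes the per-sample noise is independent):

```python
import math

# Assumed x/y range swept by the hand in the air, in cm (from the post above).
hand_range_cm = (80.0, 50.0)

# The two readings of the "4 cm" claim: cube root of a 4 cm^3 volume (~1.6 cm),
# or a literal 4 cm per side.
for step_cm in (1.6, 4.0):
    raw = tuple(r / step_cm for r in hand_range_cm)

    # Averaging N independent samples shrinks the noise by sqrt(N),
    # i.e. roughly a sqrt(N) gain in effective resolution.
    n_avg = 10
    eff = tuple(p * math.sqrt(n_avg) for p in raw)

    print(f"step {step_cm} cm: raw ~{raw[0]:.0f}x{raw[1]:.0f} points, "
          f"averaged ~{eff[0]:.0f}x{eff[1]:.0f} points")
```

That prints roughly 50x31 raw points (158x99 averaged) for the 1.6 cm reading and 20x12 (63x40 averaged) for the 4 cm one, the same ballpark as the ~150*100 and ~70*45 figures above.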
I am not sure if Natal is programmable for different modes. Kudo stated that his fingers would be able to be detected, but a child's would not. And I imagine finger detection would require someone to stand close to the sensor.
To be honest, I think there will have to be an on-screen "hand" with any Natal GUI interface. In order to let you point directly at the screen, Natal would need to know where the screen is. Instead, the "hand" would need to be relative to something on your own body, perhaps your head, and the translation from body-relative distance onto screen coordinates would be done from there. Everything with Natal will need to be relative to your own person; otherwise Natal would need to be built into the screen. In that case, the exact position isn't necessary: you would just be moving the "hand" relative to where it was last. It might jump around a little bit, but that could be fixed with some smoothing algorithms.
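To make the relative-"hand" idea concrete, here's a minimal sketch; the joint inputs, gain and smoothing factor are all hypothetical placeholders, not anything MS has actually described. The cursor moves by the change in the hand-minus-head offset, so the sensor never needs to know where the screen is, and an exponential moving average stands in for the "smoothing algorithms" above:

```python
# Minimal sketch of a body-relative cursor, assuming hypothetical per-frame
# joint positions (hand and head, in cm) from some skeleton tracker.

SCREEN_W, SCREEN_H = 1280, 720   # assumed screen resolution
GAIN = 16.0                      # screen pixels per cm of hand movement (tunable)
ALPHA = 0.3                      # smoothing factor: lower = smoother but laggier

class RelativeCursor:
    def __init__(self):
        self.cursor = [SCREEN_W / 2, SCREEN_H / 2]
        self.prev_offset = None
        self.smoothed = None

    def update(self, hand_xy, head_xy):
        # Hand position relative to the head, so walking around the room
        # doesn't move the cursor.
        offset = (hand_xy[0] - head_xy[0], hand_xy[1] - head_xy[1])

        # Exponential moving average to damp per-frame sensor jitter.
        if self.smoothed is None:
            self.smoothed = offset
        else:
            self.smoothed = tuple(ALPHA * o + (1 - ALPHA) * s
                                  for o, s in zip(offset, self.smoothed))

        # Move the cursor by the smoothed delta since the last frame,
        # clamped to the screen; no absolute screen position is ever needed.
        if self.prev_offset is not None:
            dx = (self.smoothed[0] - self.prev_offset[0]) * GAIN
            dy = (self.smoothed[1] - self.prev_offset[1]) * GAIN
            self.cursor[0] = min(max(self.cursor[0] + dx, 0), SCREEN_W - 1)
            self.cursor[1] = min(max(self.cursor[1] - dy, 0), SCREEN_H - 1)
        self.prev_offset = self.smoothed
        return tuple(self.cursor)
```

Calling cursor.update(hand_xy, head_xy) once per 30 Hz frame would give a clamped screen position; lowering ALPHA trades jumpiness for lag.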