ArnoldRimmer said:

I'm sceptical about this. As far as I understand the article, it's pretty much just speculation by Eurogamer.

There are more problems than just USB bandwidth. First of all, using 640x480 instead of 320x240 from the depth sensor would mean the Kinect algorithms have to process four times the data. Unless they somehow managed to massively reduce Kinect's processing time, such a step would make its performance even worse, because it would increase the lag even further.
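The "four times the data" figure is just the pixel count per frame, which a quick sketch confirms:

```python
# Depth samples per frame at each readout resolution.
low = 320 * 240    # 76,800 samples
high = 640 * 480   # 307,200 samples
print(high // low) # prints 4: quadruple the data to process every frame
```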

The other problem is that the real resolution of the depth sensor would only increase by a small amount. On a normal webcam or digital camera, using 640x480 instead of 320x240 does of course give four times the resolution. In the case of the distance sensor inside Kinect, however, the real resolution is neither 640x480 nor 320x240: it's the number of infrared points the device projects into the room (as seen in this video, for example: http://www.giantbomb.com/forums/general-discussion/30/what-kinect-looks-like-through-ir-goggles-points-of-laser-light/464521/). And that number doesn't increase at all when switching from 320x240 to 640x480. So using 640x480 would improve accuracy a tiny bit, but the cost of the additional processing time might easily outweigh that small gain. Most people would probably prefer the current accuracy with less lag over slightly better accuracy with even more lag.
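A rough sketch of that point, using a made-up dot-grid size (the real size of Kinect's projected IR pattern isn't stated here, so the numbers below are purely illustrative): the number of real depth measurements is fixed by the projector, so a higher readout resolution only spreads more pixels over the same measurements.

```python
# Illustration only: DOT_COLS/DOT_ROWS are assumed round numbers, not a
# real Kinect spec. The dot grid is fixed by the IR projector, so it stays
# the same no matter which readout resolution is chosen.
DOT_COLS, DOT_ROWS = 200, 160
dots = DOT_COLS * DOT_ROWS   # independent depth measurements per frame

for w, h in [(320, 240), (640, 480)]:
    pixels = w * h
    # Pixels beyond the dot count only interpolate between dots: they add
    # processing cost without adding new measurements.
    print(f"{w}x{h}: {pixels} pixels over {dots} real samples "
          f"({pixels / dots:.1f} px per measurement)")
```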

It depends on what the developer wants. If it's just a dance game, they don't need to process that many pixels to track the skeleton. If you want to do just the fingers, you can use the lower-res buffer to find the hands, then on the next pass use the higher-res buffer to read the fingers and ignore the rest of the screen. It's sort of like the new anti-aliasing and other post-processes that only need to find the edges in the image to work. Smart work is faster work.
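The two-pass idea can be sketched in a few lines. This is a toy illustration, not the actual Kinect SDK: the helper names, the 800 mm "near" threshold, and the flat row-major depth buffers are all assumptions made up for the example.

```python
# Coarse-to-fine sketch: pass 1 scans the cheap 320x240 buffer to find a
# region of interest, pass 2 touches only the matching 640x480 pixels.

LOW_W, LOW_H, SCALE = 320, 240, 2      # 640x480 is 2x the 320x240 grid
HIGH_W = LOW_W * SCALE

def find_roi_lowres(depth_low, near_mm=800):
    """Pass 1: bounding box (x, y, w, h) of every low-res sample closer
    than near_mm -- a stand-in for 'found the hand'."""
    hits = [(i % LOW_W, i // LOW_W)
            for i, d in enumerate(depth_low) if d < near_mm]
    if not hits:
        return None
    xs, ys = zip(*hits)
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def read_roi_highres(depth_high, roi):
    """Pass 2: read only the corresponding high-res pixels for finger
    detail, ignoring the rest of the frame."""
    x, y, w, h = (v * SCALE for v in roi)
    return [depth_high[(y + r) * HIGH_W + x + c]
            for r in range(h) for c in range(w)]

# Synthetic frames: everything at 1000 mm except a 10x10 'hand' at 500 mm.
depth_low = [1000] * (LOW_W * LOW_H)
for yy in range(100, 110):
    for xx in range(50, 60):
        depth_low[yy * LOW_W + xx] = 500
depth_high = [1000] * (HIGH_W * LOW_H * SCALE)

roi = find_roi_lowres(depth_low)           # (50, 100, 10, 10)
patch = read_roi_highres(depth_high, roi)  # 400 samples, not 307,200
print(roi, len(patch))
```

Pass 2 here reads 400 high-res samples instead of the full 307,200, which is the "ignore the rest of the screen" saving the post is describing.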