bobbert said:
According to the article, it takes the Xbox GPU less than 5 ms to run the algorithm on each frame. That's how they come up with 200 fps, and they used the Xbox/Kinect to come up with that number. It IS after data transfer and processing. That 0.002 is some made-up number because someone doesn't know how to put 1/200 in a calculator. There's still the time it takes to acquire the image and so forth. But either way, the lag is about one TV frame (at 30 fps) total, so it's essentially gone. Well, theoretically it is gone. The paper does say that in real-world tests there is an improvement, but it's not perfect yet.
Well, to be precise, the paper states that the "light" version of the new algorithm can run at 200 fps on the X360 GPU, and from the context that would mean 200 fps with 100% utilization of the shaders. Once you time-share them, of course, you go up from 5 ms for the "recognition" step to some more realistic value that will be at least 3 or 4 times bigger. A quick back-of-envelope version of that scaling is sketched below.
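Here's a minimal sketch of that time-sharing argument, assuming the recognition pass scales roughly inversely with the share of shaders it gets; the 33% and 25% shares are my own illustrative assumptions, not figures from the paper:

```python
# If the "light" recognition pass needs 5 ms with 100% of the X360 shaders,
# giving it only a slice of the GPU stretches it roughly in inverse proportion.
full_gpu_ms = 1000.0 / 200  # 200 fps with all shaders -> 5 ms per frame
for shader_share in (1.0, 0.33, 0.25):  # shares below 100% are assumed, not from the paper
    print(f"{shader_share:>4.0%} of shaders -> ~{full_gpu_ms / shader_share:.1f} ms per frame")
```

Which lands you right in that 15-20 ms range once the game gets most of the GPU back.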
Now, I'm not sure how much latency is introduced by the current skeletal recognition in the default libraries under the same conditions (I understand they use temporal data, so they must at least work over a camera capture time of 33 ms). So I can't say whether 15-20 ms is a great breakthrough or a smaller incremental improvement, and whether it can really save you a whole frame in real cases.
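For what it's worth, here's the rough pipeline arithmetic I have in mind, assuming total lag is roughly one camera exposure (30 Hz depth camera, ~33 ms) plus the recognition step; real systems add transfer and game-loop delays that this ignores:

```python
# Back-of-envelope pipeline latency: camera capture + recognition only.
capture_ms = 1000.0 / 30  # one Kinect depth frame at 30 Hz -> ~33 ms
for recognition_ms in (5, 15, 20):  # assumed range for the recognition step
    total_ms = capture_ms + recognition_ms
    frames = total_ms / (1000.0 / 30)
    print(f"recognition {recognition_ms:>2} ms -> total ~{total_ms:.0f} ms "
          f"(~{frames:.1f} frames at 30 Hz)")
```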
We can all agree, I think, that the "Kinect lag reduced to 2 ms" title was bollocks, though :)
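The arithmetic behind that headline, for the record:

```python
# 200 fps means 1/200 s per frame, which is 5 ms (0.005 s), not 2 ms (0.002 s).
per_frame_s = 1 / 200
print(f"{per_frame_s} s per frame = {per_frame_s * 1000:.0f} ms, not 2 ms")
```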