Ah interesting. I thought the pattern was to send out a known amount of light evenly spread over the room, so you could judge distance by the intensity of the reflection. I guess it works too by measuring the spacing of the projected dots that come back. But can you do that with a 320x240 image?
The end result is the same either way: the closer the object, the closer together the dots appear and the more light comes back.
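
For what it's worth, my understanding is that the production Kinect actually triangulates each dot's sideways shift (disparity) against a stored reference pattern, rather than using raw spacing or brightness. Here's a minimal sketch of that triangulation, with assumed (made-up) values for the IR camera focal length and projector-to-camera baseline:

```python
import numpy as np

# Minimal structured-light triangulation sketch (hypothetical numbers).
# The projector and IR camera sit a fixed baseline apart; a dot landing on
# a nearer surface appears shifted sideways in the camera image relative
# to where it lands on a far reference plane. Depth then follows from
# similar triangles: z = f * b / d.

FOCAL_PX = 580.0    # assumed IR-camera focal length, in pixels
BASELINE_M = 0.075  # assumed projector-to-camera baseline, in metres

def depth_from_disparity(disparity_px):
    """Convert per-dot disparities (pixels) to depth (metres)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):  # guard against zero disparity
        return FOCAL_PX * BASELINE_M / disparity_px

# A dot shifted 29 px reads ~1.5 m; 58 px reads ~0.75 m. Bigger shift,
# closer object, which matches the intuition above.
print(depth_from_disparity([29.0, 58.0]))
```

That might also answer the 320x240 question: since a whole window of dots is correlated at once rather than a single pixel, the disparity can presumably be estimated to sub-pixel precision, so even a low-resolution image yields usable depth steps.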
I think the way the two systems work together is to use the distance map to cut the person out, then feed that to the image-recognition software, something like the sketch below. The biggest advantage of the Kinect over the EyeToy is the vastly superior background subtraction you get with the distance map.
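
A toy version of that depth-based cut-out, assuming you already have aligned RGB and depth frames as arrays (hypothetical values, not the actual Kinect SDK):

```python
import numpy as np

# Toy depth-based background subtraction. Anything inside a depth window
# is treated as the player; everything else is zeroed out before the RGB
# frame goes on to the recognition stage.

def cut_out_person(rgb, depth_m, near_m=0.5, far_m=2.5):
    """Keep RGB pixels whose depth falls inside [near_m, far_m] metres."""
    mask = (depth_m >= near_m) & (depth_m <= far_m)
    return rgb * mask[..., np.newaxis]  # broadcast mask over colour channels

# Fake 240x320 frames, just to show the shapes involved.
rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
depth = np.full((240, 320), 4.0)   # background wall 4 m away
depth[80:200, 120:220] = 1.5       # "person" standing at 1.5 m

segmented = cut_out_person(rgb, depth)
print(segmented[0, 0], segmented[120, 160])  # background zeroed, person kept
```

The nice part is that this works regardless of lighting or what the background looks like, which is exactly where the EyeToy's colour-only subtraction struggled.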