daroamer said:
Smidlee said:


This is simply wrong.

It's not tracking "colour" at all. It's tracking DEPTH. It works more like sonar: the Kinect's IR transmitter bathes the room with IR light, and that light bounces back to the depth sensor. The closer something is to the camera, the brighter it appears. Look at the IR image again: as objects move away from the camera they get darker. The brightest thing in the image is clearly the pillow (or whatever that is) at the bottom right, and the darkest thing is the far wall.

If you were to light the scene with a bright light on the wall behind the people, an RGB camera would see them as darker than the background, and by your logic it would think the wall was closer than they are. The IR camera would still read the scene's depth correctly, because it isn't seeing visible light at all; it ignores those wavelengths.

The ability to read the depth of the scene is what allows it to generate a 3D map of the scene.


Like I said, you obviously have no idea what you're talking about. That yellow image is a computer-generated map of the distance to each object. The IR emitter works more like a projector that emits patterns. The patterns are reflected off each object differently and detected by the camera. The chip then performs a mathematical analysis of the resulting distorted pattern to determine the distances to different points in the room.
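The core of that mathematical analysis is triangulation: a projected dot appears shifted sideways in the IR camera's view, and the shift (disparity) is inversely proportional to the surface's distance. Here's a minimal sketch of the idea; the focal length, baseline, and disparity values are made-up illustration numbers, not real Kinect parameters:

```python
# Toy structured-light depth recovery: depth = focal * baseline / disparity.
# All numeric values here are hypothetical, chosen only to show the principle.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulate depth in metres from the sideways shift (in pixels)
    of a projected IR dot between the emitter's and camera's viewpoints."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A dot reflected by a near object shifts more than one on the far wall:
near = depth_from_disparity(29.0)   # large shift  -> small depth (1.5 m)
far  = depth_from_disparity(14.5)   # half the shift -> twice the depth (3.0 m)
```

Halving the observed shift doubles the computed distance, which is why the chip can turn one distorted pattern image into a full per-pixel depth map.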

It's clearly evident from the color scale of the computer-generated image that the chip can properly decode the pattern and produce accurate distances to each object.

It's not like sonar, because sonar uses time of flight. Time-of-flight measurement for light requires very expensive hardware and couldn't use a standard webcam. It's this technology, which conquers a huge mathematical challenge in a very short amount of time with inexpensive hardware, that made the Kinect feasible.