kitler53 said:
selnor said:

Ok, to stop this crap. Here is the video from CES 2010. It was shown at 6:30pm Jan 6th 2010 US time, 2:30am UK time. Natal uses hardware to do exactly what was said at E3. Fully tracking the body in real time is all done on hardware, confirmed less than 24 hours ago. The only thing software does is interpret where the limbs are and what angle the joints are at. EVERYTHING ELSE IS HARDWARE! The Natal hardware itself evaluates trillions of body configurations every frame. Itself. On its own. No CPU used from the 360. Is everyone disregarding the CES keynote altogether????

It says software is used for limb interpretation and joint angles. That's IT! Go to 1:10 in the video, then someone close this thread.

 

I apologize, but where in that video is there any explicit information about the physical location of where any processing power comes from? I went to 1:10 and I didn't hear anything specific enough to support any conclusion.


Let's use a bit of common sense.

He says very clearly: "The 3D camera, available for the first time, gives it the information about the distance of every point on the body" (later another employee clarifies this as trillions of times every frame, at 30 frames a second). "But it doesn't give an interpretation of where the limbs are and what angle the joints are at. So for that we need to build software." The person 'confirming' that Natal is the one evaluating trillions of body configurations a second is Andrew Fitzgibbon, clearly saying: "What Natal does is effectively evaluating trillions of body configurations a second."

Now, it doesn't take a high IQ to notice that the software is confirmed as doing the interpretation of where the limbs are and the joint angles. That's it. According to the article in the OP, software is doing all the body configuration calculations as well. Well, this video VERY CLEARLY dispels that as total BS.

Have a nice day. :)