daroamer said:
NJ5 said:
I think it was pretty clear in my reply above that I was talking about algorithms i.e. software. Well, some algorithms are implemented in hardware, but that's probably not the case here.

Lee was talking about the software... so was I.

I'm pretty sure he's talking about the 3D tracking algorithms inside Natal itself — the ones that track the person in front of it and do things like predict movement. That's unrelated to how the tracking data translates to a rigged 3D avatar. He's discussing Natal, not a specific application. Or do you think every game is going to use the exact same character rigging?

We clearly have a different view of how that demo worked. Read this interview, specifically the quotes I've pasted below:

http://www.eurogamer.net/articles/e3-post-natal-discussion-interview

So we have a custom chip that we put in the sensor itself. The chip we designed with Microsoft will be doing the majority of the processing for you, so as a game designer you can think about the sensor as a normal input device - something that's relatively free for you as a game designer.

Designers have 100 per cent of the resources of the console and this device is just another input device they can use. It's a fancy, cool, awesome device, but essentially you can just treat it from a free-to-platform perspective, because all of the magic - all of the processing - happens sensor-side.

Essentially we do a 3D body scan of you. We graph 48 joints in your body and then those 48 joints are tracked in real-time, at 30 frames per second. So several for your head, shoulders, elbows, hands, feet...

It seems logical to conclude that the avatar demo was simply querying those joint positions and rendering the avatar on the assumption that they're correct. According to these quotes, the demo just takes the tracking data and draws it on screen — it isn't doing any fancy processing of its own.
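To illustrate what I mean, here's a rough sketch of how a game could consume the sensor-side data the interview describes (48 joints at 30 fps). The names `Joint`, `drive_avatar`, and the frame layout are all invented for illustration — this isn't a real Natal API, just the "treat it as an input device" idea:

```python
# Hypothetical sketch: the game treats Natal as a plain input device.
# All joint tracking happens sensor-side; the game just reads joints
# each frame and maps them onto the rigged avatar's bones.
from dataclasses import dataclass

NUM_JOINTS = 48   # joints tracked, per the interview
FRAME_RATE = 30   # updates per second, per the interview

@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float

def drive_avatar(joints):
    """Map each tracked joint straight onto an avatar bone transform.
    No extra processing -- the sensor already did the heavy lifting."""
    return {j.name: (j.x, j.y, j.z) for j in joints}

# Fake frame standing in for one 30 Hz sensor update.
frame = [Joint(f"joint_{i}", 0.0, 1.0, 2.0) for i in range(NUM_JOINTS)]
pose = drive_avatar(frame)
print(len(pose))  # one bone transform per tracked joint
```

The point is that the per-frame work on the console side is trivial — a lookup and a render — which matches the "100 per cent of the resources of the console" claim.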

 



My Mario Kart Wii friend code: 2707-1866-0957