NJ5 said:
We clearly have a different view of how that demo worked. Read this interview, specifically the two quotes I put after: http://www.eurogamer.net/articles/e3-post-natal-discussion-interview

"So we have a custom chip that we put in the sensor itself. The chip we designed with Microsoft will be doing the majority of the processing for you, so as a game designer you can think about the sensor as a normal input device - something that's relatively free for you as a game designer."

"Designers have 100 per cent of the resources of the console and this device is just another input device they can use. It's a fancy, cool, awesome device, but essentially you can just treat it from a free-to-platform perspective, because all of the magic - all of the processing - happens sensor-side. Essentially we do a 3D body scan of you. We graph 48 joints in your body and then those 48 joints are tracked in real-time, at 30 frames per second. So several for your head, shoulders, elbows, hands, feet..."
It seems logical to conclude that the avatar demo was simply querying those joint positions and rendering the avatar on the assumption that they're correct. According to these quotes, the demo should just be taking the data the sensor hands back and drawing it on screen, not doing any fancy processing of its own.
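In other words, the demo loop should amount to something like this rough sketch - the names here are made up for illustration, not any real Natal API, just the "read the joints, draw the joints" idea from the quotes:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Joint:
    x: float
    y: float
    z: float

# Toy skeleton of parent -> child bone pairs (the real sensor reports 48 joints).
SKELETON_BONES: List[Tuple[str, str]] = [
    ("shoulder_l", "elbow_l"),
    ("elbow_l", "hand_l"),
]

def render_avatar_frame(joints: Dict[str, Joint]) -> None:
    # Take the sensor's joint positions as-is and "draw" each bone.
    # No filtering, no joint limits - a pure pass-through of the tracked data.
    for parent, child in SKELETON_BONES:
        p, c = joints[parent], joints[child]
        print(f"{parent}->{child}: ({p.x}, {p.y}, {p.z}) to ({c.x}, {c.y}, {c.z})")

# One frame of invented tracking data; the device would hand you a fresh set at 30 fps.
frame = {
    "shoulder_l": Joint(0.0, 1.4, 2.0),
    "elbow_l":    Joint(0.3, 1.1, 2.0),
    "hand_l":     Joint(0.5, 0.9, 2.0),
}
render_avatar_frame(frame)
```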
Yes, EXCEPT that if you don't take joint restrictions into account, there is nothing telling the avatar that a joint can't rotate 360 degrees and end up back in the same spot, only twisted. It's not so much a question of whether the joint is in the right spot; it's how it arrived at that spot.
I'm not saying the system doesn't need work - by all accounts it's still over a year from launch - but I am saying you can't call the system a failure, or claim it won't work, just because the avatar was poorly rigged.
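To make that concrete, here's a rough sketch of the kind of rig-side restriction I mean - the joint names and limits are invented for illustration, and a real rig would constrain full 3D rotations rather than a single angle:

```python
# Assumed per-joint rotation limits in degrees; a real rig would tune these per axis.
JOINT_LIMITS = {
    "elbow_l":  (0.0, 150.0),   # an elbow can't hyperextend or spin right round
    "knee_l":   (0.0, 140.0),
    "head_yaw": (-80.0, 80.0),
}

def apply_joint_limit(joint: str, raw_angle_deg: float) -> float:
    # Clamp a tracked joint angle into an anatomically plausible range.
    # Without something like this, nothing stops the avatar's limb from sweeping
    # through 360 degrees and "arriving" back at the same spot, only twisted.
    lo, hi = JOINT_LIMITS[joint]
    return max(lo, min(hi, raw_angle_deg))

# A glitchy reading: the end position may look fine, but the implied path is
# impossible for a human elbow, so the rig clamps it.
print(apply_joint_limit("elbow_l", 250.0))   # -> 150.0
```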