And this is a pretty nice interview about the system. Give it a read.
http://www.next-gen.biz/features/interview-andrew-oliver
Motion tracking is never completely clean – how good is Kinect?
It's a very clever system that they've got. They look at the depth data, they work out fairly quickly if it's a human, and then they apply their algorithms to give you all the bones or whatever. If you're standing up in front of it, it works, and now, with the new software libraries, it works if you're sitting down on the sofa too. That was one big thing people were questioning: whether you could sit on the sofa. The new libraries handle that, but there are certain things, like our fitness game where you sit on the floor, where it gets confused. But you can break even the most expensive motion capture systems out there, probably Vicon, and that's why you employ clean-up animators to go and fill in all the little gaps and stuff like that. We don't have the luxury of that offline clean-up ability; we have to do it live.
But then, what is it that you're doing live? For example, in Biggest Loser the skeleton doesn't work when you're lying on the floor, so we had to look at it another way. They've given us the software library, and it can't cover every case, but we can look at the silhouette and see that the player is doing a press-up. You can actually see that their bum's lagging and that they're bending their back, and then I write a software algorithm that works that out. It's just a bit of image processing. So they've given you a generic piece, which is actually pretty impressive and covers most cases, certainly all the standing up, and now sitting down. If you want to go further than that, you do it yourself in software.
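That pipeline he describes, depth data in, player blob out, skeleton on top, is worth making concrete. Here's a minimal sketch in Python of the first two steps, assuming a depth frame arrives as a numpy array of millimetre readings; the thresholds and function names are my own invention, not anything from the Kinect SDK:

```python
import numpy as np

def extract_player_mask(depth_mm, near_mm=800, far_mm=4000):
    """Boolean mask of pixels inside the playable depth range.

    depth_mm is a (H, W) array of millimetre readings, 0 meaning
    "no reading", roughly the shape of data a Kinect-style depth
    sensor hands you each frame. Thresholds here are invented.
    """
    return (depth_mm > near_mm) & (depth_mm < far_mm)

def looks_like_a_person(mask, min_pixels=5000, min_aspect=1.2):
    """Crude plausibility check before running any skeleton fit:
    a standing player's blob should be reasonably large and
    taller than it is wide. Both cutoffs are made-up numbers."""
    if mask.sum() < min_pixels:
        return False
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    return height / width >= min_aspect
```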
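And the silhouette trick for the press-up really is "just a bit of image processing". Here's a hypothetical version of that check, definitely not Oliver's actual algorithm, just the same idea in a few lines: take a side-on silhouette mask, trace the top edge of the body, draw a straight "plank" line from one end to the other, and flag the rep if the midsection dips below that line (the bum lagging, the back bending). The tolerance is made up too:

```python
import numpy as np

def top_profile(mask):
    """For each image column containing silhouette pixels, the row
    index of the topmost one (smaller row = higher in the image)."""
    cols = np.flatnonzero(mask.any(axis=0))
    tops = mask[:, cols].argmax(axis=0)  # first True row per column
    return cols, tops

def hips_are_sagging(mask, tolerance_px=12):
    """Heuristic press-up form check on a side-on silhouette."""
    cols, tops = top_profile(mask)
    if len(cols) < 3:
        return False  # no usable silhouette this frame, no verdict
    # Ideal "plank" line between the two ends of the body.
    line = np.interp(cols, [cols[0], cols[-1]], [tops[0], tops[-1]])
    # Larger row index = lower in the image = closer to the floor.
    sag = tops - line
    mid = slice(len(cols) // 3, 2 * len(cols) // 3)  # midsection only
    return sag[mid].max() > tolerance_px
```

The nice property of doing it this way is that it degrades gracefully: no silhouette means no verdict, so you only override the stock skeleton where your own heuristic has something to say.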