Adinnieken said:
trasharmdsister12 said:
Adinnieken said:
Except that Microsoft has already stated that it would have been impossible to send that amount of data down the pipe.
The issue isn't the skeletal tracking; the issue is the amount of data that streamed video represents being sent down to the console. While that isn't true of every situation, you have to understand that in some circumstances full RGB, IR, and tracking data are all coming down. If you don't need anything but the tracking data, you can improve the fidelity and resolution, and despite what people say, Microsoft is doing exactly that.
They have demonstrated finger tracking in Kinect Fun Labs, and an upcoming game will feature it as well. They have also announced higher-fidelity motion tracking, which Fable: The Journey is demonstrating and which should be available this autumn.
One of the original complaints about Video Kinect, for example, was that its resolution was much worse than the Xbox Live Vision camera's. The reason is the limited bandwidth: Video Kinect employed tracking (both limb and head) and needed to stream the RGB video feed at the same time.
So no, a second pre-processor would not have added any value. As it is right now, Kinect uses less than 1% of the processing power of one core on the Xbox 360. Had there been a bigger pipe to work with (i.e. USB 3.0), we might be having a different argument, in which case I might agree with you.
|
I'm agreeing with you that transferring the full-resolution image data from the sensor to the 360 over USB is impossible. What I'm saying is that IF there had been onboard processing within the Kinect, then there wouldn't be a need for that amount of data transfer to the box.
No doubt MS has done work to improve the tracking algorithms, but what I'm saying is that if there were some form of built-in processing in the Kinect, then full-resolution images could have been used straight from the RGB sensor for processing right on the Kinect itself, instead of the reduced resolution they're using right now. There would be no need to even transfer the imaging data to the box, solving the data-throughput problem posed by USB 2.0. This would give the algorithms more data to work with (or at least allow some form of pre-processing to condense the data), improving tracking without any change to the software. What is then sent to the box is the processed data: simple interpretations such as gestures, skeletal data, or something else that isn't as large as the full images but still represents their full scope. Game developers would access it through MS's Kinect development APIs just as they do right now.
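To put rough numbers on the data-reduction argument above, here's a back-of-the-envelope sketch. The figures (640x480 24-bit RGB at 30 fps, a 20-joint skeleton of 3D float coordinates) are illustrative assumptions for the sake of the comparison, not official Kinect specs.

```python
# Illustrative comparison: raw video bandwidth vs. compact skeletal output.
# All figures are assumptions for the sake of argument, not official specs.

def stream_bytes_per_sec(width, height, bytes_per_pixel, fps):
    """Raw, uncompressed bandwidth of one video stream, in bytes/s."""
    return width * height * bytes_per_pixel * fps

# Full-resolution RGB straight off the sensor (assumed 640x480, 24-bit, 30 fps)
raw_rgb = stream_bytes_per_sec(640, 480, 3, 30)   # 27,648,000 bytes/s (~27.6 MB/s)

# Compact skeletal output after hypothetical on-sensor processing:
# 20 joints x 3 floats (x, y, z) x 4 bytes, at 30 fps
skeleton = 20 * 3 * 4 * 30                        # 7,200 bytes/s (~7.2 KB/s)

print(f"raw RGB:   {raw_rgb / 1e6:.1f} MB/s")
print(f"skeleton:  {skeleton / 1e3:.1f} KB/s")
print(f"reduction: ~{raw_rgb // skeleton}x")
```

The point of the sketch: shipping interpreted tracking data instead of raw frames shrinks the USB payload by several orders of magnitude, which is why onboard processing would sidestep the USB 2.0 bottleneck entirely.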
The improvements they're making are all algorithmic. It's the algorithms they're running on the retrieved images and IR data that are making sensing more accurate. An analogy would be the fuel efficiency of a car: the better the fuel quality (the software), the more efficiently it burns and the higher the fuel efficiency your car will yield. Alternatively, you could build a more fuel-efficient engine and a lighter car (the hardware), which is what I'm talking about.
You completely missed what I was saying in my initial post. I'm also neither knocking nor praising their design decisions. I pointed out both the good and the bad of it, and am simply providing information so others can judge the situation for themselves.
|
I'm going to explain this differently:
RGB camera: Used for providing RGB video, facial recognition, and mapping RGB images onto the IR 3D maps (faces, objects). The RGB camera is capable of 640x480 resolution but was reduced to 320x240 for bandwidth purposes. It has no direct impact on tracking.
IR camera: Used for providing depth and tracking information. This information is preprocessed on the Kinect and sent down for additional processing. The IR camera is capable of 320x240.
Microphone array: Used for voice recognition. This information is sent down for additional processing when activated.
The data from the IR camera is sent down to the Xbox 360 continuously, unless the game or application isn't using it. The RGB camera data is only sent down in certain circumstances; however, in those cases bandwidth has to be available not only for all the data from Kinect, but for everything else on the console as well.
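The shared-bandwidth point can be illustrated with rough arithmetic. The stream parameters below (16-bit depth at 320x240, a 4-mic 16 kHz 16-bit audio array, and a ~35 MB/s practical USB 2.0 payload) are assumptions chosen to show the shape of the budget, not official figures.

```python
# Illustrative USB 2.0 bandwidth budget when multiple Kinect streams are
# active at once. All stream parameters are assumptions, not official specs.

USB2_BUDGET = 35_000_000  # rough practical USB 2.0 payload, bytes/s (assumed)

def stream(width, height, bytes_per_pixel, fps):
    """Raw bandwidth of one uncompressed video stream, bytes/s."""
    return width * height * bytes_per_pixel * fps

rgb_reduced = stream(320, 240, 3, 30)   # RGB at the reduced 320x240
rgb_full    = stream(640, 480, 3, 30)   # RGB at the sensor's full 640x480
depth       = stream(320, 240, 2, 30)   # 16-bit depth map from the IR path
audio       = 4 * 16_000 * 2            # 4-mic array, 16 kHz, 16-bit (assumed)

reduced_total = rgb_reduced + depth + audio
full_total    = rgb_full + depth + audio

print(f"reduced RGB + depth + audio: {reduced_total / 1e6:.1f} MB/s")
print(f"full RGB + depth + audio:    {full_total / 1e6:.1f} MB/s "
      f"of a ~{USB2_BUDGET / 1e6:.0f} MB/s bus shared with the whole console")
```

Under these assumptions, everything at full sensor resolution would eat most of the practical USB 2.0 throughput on its own, leaving little headroom for the rest of the console's peripherals, which is consistent with the resolution reduction described above.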
Additional processing on the Kinect would not have helped unless a higher resolution IR camera was used.
|