
Forums - Microsoft - Kinect: A Broken Promise

 

Do you still play Kinect?

Yes: 71 (33.18%)
No: 143 (66.82%)
Total: 214
Adinnieken said:
fillet said:
Nah you guys analyze too much.

The problem with Kinect comes purely and simply down to the response time of the sensor; it's totally useless for anything competitive, and hence you get "loose" party games and dull racers and whatnot.

Got nothing to do with lazy developers; it's simply NOT POSSIBLE to make good games for it, other than the odd wacky abstract Child of Eden etc., but those would soon wear thin if you had 10 of them on the shelves.

Blame Microsoft: Kinect is an abomination and a disgrace and basically useless.

Move, on the other hand, is something quite good.


The challenge for Kinect isn't the sensor at all.  It's the pipeline.  The USB 2 connection on the Xbox 360 doesn't offer enough bandwidth to push down as much data as the Kinect is capable of providing.  In the end, it means figuring out ways to get more data down a limited pipeline, which they have been doing.

This is why the fuss about Microsoft removing the additional processor was BS: even if the pre-processor were still there, the bandwidth to send the data down the pipeline would still be limited. So it wouldn't have improved anything, short-term or long-term.

If you've read anything about Fable: The Journey, you'd have read that there is a negligible delay in response in that game compared to Kinect Star Wars. That is something, had you read the recent interview with Kudo Tsunoda, you would have known Microsoft was working to improve (response time). More importantly, this means that when Kinect is mated with a next-generation console over USB 3, it'll be able to push down higher-quality data (better resolution) and, thanks to all the work Microsoft has done on response time, still provide a gaming experience as good as, if not better than, the one that will be available this Autumn.

 


I would love to believe you, but even on the dashboard Kinect is laggy. There's no excuse for it; something like Kinect simply demands <50 ms response times.

That stuff about USB 2.0 is simply not true; I'd be interested to read where you read it.

I assure you that nowhere near 30 MB/s of data is being transferred by Kinect, which is the practical bandwidth of USB 2.0. That is definitely not the cause of the bad response time; it's cheap electronics, pure and simple.

The camera in the Kinect will be picking up differential data for certain points determined to be relevant on someone's body between each captured frame. That certainly does not saturate the USB 2.0 bus; don't believe everything you read! I mean, the cameras inside are only something like 640x480, and I assume there are 2-3 of them; the number of data points is ridiculously low on a very low-resolution camera that isn't even capable of anything more specific than the size of a hand (it can't do finger recognition). To seriously claim that USB 2.0 is the problem... well, let's just say flying pigs are more likely. Believe me, I'm not kidding when I say I would like to believe you; I got pulled along on the Kinect train and own one, but it's next to worthless in the real world.

With all due respect, it's the first time I've ever heard of the USB 2.0 bus being the cause of the poor response time.

 

Finally, they've been saying that Kinect response time will improve for about a year or just over now. The latest big game, Kinect Star Wars, certainly didn't show that to be true in any way.

One last thing: they won't be improving the resolution of the data received as that person claims; there's serious BS going on there. 640x480 cameras at the distance Kinect is set to work at won't recognise anything smaller than a hand. It's simply not possible, as the resolution just isn't there. Try setting your webcam to 640x480 and stand 12 feet away (Kinect has to work at this distance at least).

...You see what I'm talking about? Yes, you can SEE your fingers, but, considering the environmental factors that have to be taken into account and the error correction (let's face it, Kinect goes mental now and misses lots of stuff already), do you really think a motion-sensing device with a 640x480 camera like Kinect will become any more accurate with a SOFTWARE update? Not a chance.

That's akin to believing those "image enhancement" scenes in films like the Bourne trilogy, where a blurry photo is "enhanced" to provide a higher resolution.

Not possible!
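The resolution claim above can be sanity-checked with a little trigonometry. A rough sketch (the ~57° horizontal field of view is the commonly cited Kinect figure; treat it and the 640-pixel width as assumptions):

```python
import math

# Footprint of a single camera pixel at a given distance, assuming the
# ~57 degree horizontal field of view commonly cited for Kinect.
def mm_per_pixel(distance_m, fov_deg=57.0, pixels=640):
    scene_width_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
    return scene_width_m / pixels * 1000  # millimetres per pixel

near = mm_per_pixel(2.0)   # ~3.4 mm per pixel at 2 m
far = mm_per_pixel(3.6)    # ~6.1 mm per pixel at roughly 12 feet
```

A finger roughly 15 mm wide covers only two or three pixels at the far end of that range, which is the substance of the "can't see anything smaller than a hand" complaint.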



trasharmdsister12 said:

Adinnieken said:

This is why the fuss about Microsoft removing the additional processor was BS: even if the pre-processor were still there, the bandwidth to send the data down the pipeline would still be limited. So it wouldn't have improved anything, short-term or long-term.

If the built-in processor had been there and designed to analyze the images and send back analysis data (or some form of broken-down skeletal data) rather than images, then it would have improved the tracking capabilities (accuracy, speed). Kinect would then be able to use full-resolution images from its camera in the internal analysis while also overcoming, in real time, the data-throughput limit posed by the USB 2.0 interface (full images are likely far larger than simple text/integer data streams of critical points of the human body). The developer would then simply use this data, in correspondence with Microsoft's Kinect development APIs, to perform the required action.

Of course, this would also have driven up the cost of designing and manufacturing the sensor, and it probably would've screwed up Kinect's market-penetration strategy; not to mention the heat and power draw the processor would create, causing a greater chance of faults in the design as well as all Kinects requiring a separate power adapter. It was a design trade-off, and Microsoft opted for a more marketable and convenient device over a more accurate one. Whether that's a good or bad decision depends on the person.

@OP I just got my Kinect a month and a bit ago and I haven't had much time to play any games (on any platform) in recent weeks but I do try to jump into some Kinect action when I get the spare half hour just to loosen up. I'm also starting to read up on open drivers and APIs on it and am hoping to make something of some of my quirky ideas for Kinect use. So... I answered Yes to the poll.
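The size argument above, that skeletal data is far smaller than image data, is easy to quantify. A quick sketch (the 20-joint, 3-floats-per-joint skeleton layout is an illustrative assumption about the tracking payload, not a published figure):

```python
# Per-frame payload: a full camera image vs. a skeletal-tracking packet.
frame_bytes = 640 * 480 * 3     # one 24-bit RGB frame: 921,600 bytes
skeleton_bytes = 20 * 3 * 4     # 20 joints x (x, y, z) x 4-byte float: 240 bytes

ratio = frame_bytes / skeleton_bytes   # the image is ~3,840x larger
```

Whatever the exact packet format, the gap is several orders of magnitude, which is why on-device processing would have sidestepped the USB 2.0 throughput problem entirely.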


Except that Microsoft has already stated that it would have been impossible to push that amount of data down.

The issue isn't the skeletal tracking; the issue is the amount of data that streamed video represents when sent down to the console. While that isn't true of every situation, you have to understand that in some circumstances full RGB, IR, and tracking data are all coming down. If you don't need anything but the tracking data, you can improve the fidelity and resolution, and despite what people say, Microsoft is doing that.

They have demonstrated finger tracking in Kinect Fun Labs, and an upcoming game will feature it as well. They have announced, and Fable: The Journey is demonstrating, higher-fidelity motion tracking which should be available this Autumn.

One of the original complaints with Video Kinect, for example, was that its resolution was much worse than the Xbox Vision camera's. The reason is the limited bandwidth, the fact that Video Kinect employed tracking (both limb and head), and the need to stream the RGB video feed.

So no, the second pre-processor would not have added any value. As it stands, Kinect uses less than 1% of the processing power of one core on the Xbox 360. Had there been a bigger pipe to work with (i.e. USB 3), we might have a different argument, in which case I might agree with you.



fillet said:
Adinnieken said:
fillet said:
Nah you guys analyze too much.

The problem with Kinect comes purely and simply down to the response time of the sensor; it's totally useless for anything competitive, and hence you get "loose" party games and dull racers and whatnot.

Got nothing to do with lazy developers; it's simply NOT POSSIBLE to make good games for it, other than the odd wacky abstract Child of Eden etc., but those would soon wear thin if you had 10 of them on the shelves.

Blame Microsoft: Kinect is an abomination and a disgrace and basically useless.

Move, on the other hand, is something quite good.


The challenge for Kinect isn't the sensor at all.  It's the pipeline.  The USB 2 connection on the Xbox 360 doesn't offer enough bandwidth to push down as much data as the Kinect is capable of providing.  In the end, it means figuring out ways to get more data down a limited pipeline, which they have been doing.

This is why the fuss about Microsoft removing the additional processor was BS: even if the pre-processor were still there, the bandwidth to send the data down the pipeline would still be limited. So it wouldn't have improved anything, short-term or long-term.

If you've read anything about Fable: The Journey, you'd have read that there is a negligible delay in response in that game compared to Kinect Star Wars. That is something, had you read the recent interview with Kudo Tsunoda, you would have known Microsoft was working to improve (response time). More importantly, this means that when Kinect is mated with a next-generation console over USB 3, it'll be able to push down higher-quality data (better resolution) and, thanks to all the work Microsoft has done on response time, still provide a gaming experience as good as, if not better than, the one that will be available this Autumn.

 


I would love to believe you, but even on the dashboard Kinect is laggy. There's no excuse for it; something like Kinect simply demands <50 ms response times.

That stuff about USB 2.0 is simply not true; I'd be interested to read where you read it.

I assure you that nowhere near 30 MB/s of data is being transferred by Kinect, which is the practical bandwidth of USB 2.0. That is definitely not the cause of the bad response time; it's cheap electronics, pure and simple.

The camera in the Kinect will be picking up differential data for certain points determined to be relevant on someone's body between each captured frame. That certainly does not saturate the USB 2.0 bus; don't believe everything you read! I mean, the cameras inside are only something like 640x480, and I assume there are 2-3 of them; the number of data points is ridiculously low on a very low-resolution camera that isn't even capable of anything more specific than the size of a hand (it can't do finger recognition). To seriously claim that USB 2.0 is the problem... well, let's just say flying pigs are more likely. Believe me, I'm not kidding when I say I would like to believe you; I got pulled along on the Kinect train and own one, but it's next to worthless in the real world.

With all due respect, it's the first time I've ever heard of the USB 2.0 bus being the cause of the poor response time.

 

Finally, they've been saying that Kinect response time will improve for about a year or just over now. The latest big game, Kinect Star Wars, certainly didn't show that to be true in any way.

One last thing: they won't be improving the resolution of the data received as that person claims; there's serious BS going on there. 640x480 cameras at the distance Kinect is set to work at won't recognise anything smaller than a hand. It's simply not possible, as the resolution just isn't there. Try setting your webcam to 640x480 and stand 12 feet away (Kinect has to work at this distance at least).

...You see what I'm talking about? Yes, you can SEE your fingers, but, considering the environmental factors that have to be taken into account and the error correction (let's face it, Kinect goes mental now and misses lots of stuff already), do you really think a motion-sensing device with a 640x480 camera like Kinect will become any more accurate with a SOFTWARE update? Not a chance.

That's akin to believing those "image enhancement" scenes in films like the Bourne trilogy, where a blurry photo is "enhanced" to provide a higher resolution.

Not possible!

Despite the RGB camera resolution being 640x480, the resolution used for tracking is 320x240.  Again, tracking is one thing, streaming video is another. 

USB 2 technically has enough bandwidth, but the problem resides in the architecture of the Xbox 360 and the fact that there is no dedicated hub for Kinect. Thus, there is no guarantee that in a particular person's circumstances all of the necessary bandwidth would be available, because of whatever else may be connected to the console at the same time. It is much easier simply to say that there isn't enough bandwidth through USB 2 than to try to explain the specific details of the issue.

In order to ensure that under all circumstances the Xbox 360 could support the incoming data in a timely manner, the resolution of the RGB camera was dropped to 320x240. The IR camera is already at 320x240. Again, tracking has little to do with the bandwidth issue, as what comes from the Kinect is data. It's the video stream from the RGB camera that's the issue.

Again, finger tracking has already been demonstrated to work with the existing Xbox 360 and the existing Kinect, through the improvements made to Kinect so far. Whether you want to believe it doesn't make one bit of difference to me; it's done. If you have an Xbox 360 and a Kinect, just fire up Kinect Fun Labs. The finger-tracking capability is featured there.

Also, Kinect does not work 12 feet away.  The ideal range is 6-10'.   At 12' it won't work at all. 

Kinect is improved through software updates; it has been from day one. Anyone involved in the Kinect beta can tell you about the improvements that were sent down and their impact on tracking. And yes, new or refined algorithms can improve tracking. The reports of improvements seen in Fable: The Journey aren't from Microsoft but from independent, third-party journalists.

So while your opinion is appreciated, you clearly do not know what you're talking about. 



trasharmdsister12 said:
Adinnieken said:

Except that Microsoft has already stated that it would have been impossible to push that amount of data down.

The issue isn't the skeletal tracking; the issue is the amount of data that streamed video represents when sent down to the console. While that isn't true of every situation, you have to understand that in some circumstances full RGB, IR, and tracking data are all coming down. If you don't need anything but the tracking data, you can improve the fidelity and resolution, and despite what people say, Microsoft is doing that.

They have demonstrated finger tracking in Kinect Fun Labs, and an upcoming game will feature it as well. They have announced, and Fable: The Journey is demonstrating, higher-fidelity motion tracking which should be available this Autumn.

One of the original complaints with Video Kinect, for example, was that its resolution was much worse than the Xbox Vision camera's. The reason is the limited bandwidth, the fact that Video Kinect employed tracking (both limb and head), and the need to stream the RGB video feed.

So no, the second pre-processor would not have added any value. As it stands, Kinect uses less than 1% of the processing power of one core on the Xbox 360. Had there been a bigger pipe to work with (i.e. USB 3), we might have a different argument, in which case I might agree with you.

I'm agreeing with you that transferring the full-res image data from the sensor to the 360 over USB is impossible. What I'm saying is that IF there had been onboard processing within the Kinect, then there wouldn't have been a need for that amount of data transfer to the box.

No doubt MS has done work to improve the tracking algorithms, but what I'm saying is that if there were some form of built-in processing in the Kinect, then full-resolution images could have been used straight from the RGB sensors (instead of the subset resolution they're using right now) for processing right on the Kinect sensor, with no need to even transfer the imaging data to the box, solving the data-throughput problem posed by USB 2.0. This would give more data to the algorithms (or at least to some form of pre-processing algorithms that condense the data), improving tracking without any change to the software. What is then sent to the box is the processed data (be it simple interpretations of the data: gestures, simple skeletal data, or something else that isn't as large as the full images but keeps the full scope of what it represents), to be used by game developers through MS's Kinect development APIs just as they are right now.

The improvements they're making are all from an algorithmic point of view. It's the algorithms they're running on the retrieved images and IR data that are making sensing more accurate. An analogy would be the fuel efficiency of a car. The better the fuel quality (the software), the more efficiently it burns and the higher the fuel efficiency your car will yield. Alternatively, you could build a more efficient engine and a lighter car (the hardware) to improve fuel efficiency, which is what I'm talking about.

You completely missed what I was saying with my initial post. I'm also not knocking nor praising their design decisions. I pointed out both the good and the bad of it and am simply using information to allow others to judge the situation for themselves.

I'm going to explain this differently:

RGB camera: Used for providing RGB video, facial recognition, and mapping RGB images onto IR 3D maps (faces, objects). The RGB camera is capable of 640x480 resolution but was reduced to 320x240 for bandwidth purposes. It has no direct impact on tracking.

IR camera: Used for providing depth and tracking information.  This information is preprocessed on the Kinect and sent down for additional processing.  The IR camera is capable of 320x240.

Microphone array: Used for voice recognition.  This information is sent down for additional processing when activated.

The data from the IR camera is sent down to the Xbox 360 all the time, unless it isn't being used by the game or application. The RGB camera data is only sent down in certain circumstances; however, in those cases there has to be enough bandwidth not only for all the data from Kinect but for everything else on the console.

Additional processing on the Kinect would not have helped unless a higher resolution IR camera was used.
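The breakdown above implies very different data rates per stream. A back-of-envelope sketch (the bit depths are assumptions; packing each depth sample into 2 bytes is a common convention, not a figure from this thread):

```python
# Uncompressed data rate of one video stream in MB/s.
def stream_rate_mb_s(width, height, bytes_per_pixel, fps=30):
    return width * height * bytes_per_pixel * fps / 1e6

rgb_320 = stream_rate_mb_s(320, 240, 3)   # ~6.9 MB/s (RGB as shipped)
rgb_640 = stream_rate_mb_s(640, 480, 3)   # ~27.6 MB/s (full-res RGB)
depth = stream_rate_mb_s(320, 240, 2)     # ~4.6 MB/s (2 bytes per depth sample)
```

At 320x240 the RGB and depth streams together stay near 11-12 MB/s, inside USB 2.0's practical ceiling of roughly 35 MB/s, while full-res RGB alone nearly triples the load; that matches the trade-off described above.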



Adinnieken said:
fillet said:
Adinnieken said:
fillet said:
Nah you guys analyze too much.

The problem with Kinect comes purely and simply down to the response time of the sensor; it's totally useless for anything competitive, and hence you get "loose" party games and dull racers and whatnot.

Got nothing to do with lazy developers; it's simply NOT POSSIBLE to make good games for it, other than the odd wacky abstract Child of Eden etc., but those would soon wear thin if you had 10 of them on the shelves.

Blame Microsoft: Kinect is an abomination and a disgrace and basically useless.

Move, on the other hand, is something quite good.


The challenge for Kinect isn't the sensor at all.  It's the pipeline.  The USB 2 connection on the Xbox 360 doesn't offer enough bandwidth to push down as much data as the Kinect is capable of providing.  In the end, it means figuring out ways to get more data down a limited pipeline, which they have been doing.

This is why the fuss about Microsoft removing the additional processor was BS: even if the pre-processor were still there, the bandwidth to send the data down the pipeline would still be limited. So it wouldn't have improved anything, short-term or long-term.

If you've read anything about Fable: The Journey, you'd have read that there is a negligible delay in response in that game compared to Kinect Star Wars. That is something, had you read the recent interview with Kudo Tsunoda, you would have known Microsoft was working to improve (response time). More importantly, this means that when Kinect is mated with a next-generation console over USB 3, it'll be able to push down higher-quality data (better resolution) and, thanks to all the work Microsoft has done on response time, still provide a gaming experience as good as, if not better than, the one that will be available this Autumn.

 


I would love to believe you, but even on the dashboard Kinect is laggy. There's no excuse for it; something like Kinect simply demands <50 ms response times.

That stuff about USB 2.0 is simply not true; I'd be interested to read where you read it.

I assure you that nowhere near 30 MB/s of data is being transferred by Kinect, which is the practical bandwidth of USB 2.0. That is definitely not the cause of the bad response time; it's cheap electronics, pure and simple.

The camera in the Kinect will be picking up differential data for certain points determined to be relevant on someone's body between each captured frame. That certainly does not saturate the USB 2.0 bus; don't believe everything you read! I mean, the cameras inside are only something like 640x480, and I assume there are 2-3 of them; the number of data points is ridiculously low on a very low-resolution camera that isn't even capable of anything more specific than the size of a hand (it can't do finger recognition). To seriously claim that USB 2.0 is the problem... well, let's just say flying pigs are more likely. Believe me, I'm not kidding when I say I would like to believe you; I got pulled along on the Kinect train and own one, but it's next to worthless in the real world.

With all due respect, it's the first time I've ever heard of the USB 2.0 bus being the cause of the poor response time.

 

Finally, they've been saying that Kinect response time will improve for about a year or just over now. The latest big game, Kinect Star Wars, certainly didn't show that to be true in any way.

One last thing: they won't be improving the resolution of the data received as that person claims; there's serious BS going on there. 640x480 cameras at the distance Kinect is set to work at won't recognise anything smaller than a hand. It's simply not possible, as the resolution just isn't there. Try setting your webcam to 640x480 and stand 12 feet away (Kinect has to work at this distance at least).

...You see what I'm talking about? Yes, you can SEE your fingers, but, considering the environmental factors that have to be taken into account and the error correction (let's face it, Kinect goes mental now and misses lots of stuff already), do you really think a motion-sensing device with a 640x480 camera like Kinect will become any more accurate with a SOFTWARE update? Not a chance.

That's akin to believing those "image enhancement" scenes in films like the Bourne trilogy, where a blurry photo is "enhanced" to provide a higher resolution.

Not possible!

Despite the RGB camera resolution being 640x480, the resolution used for tracking is 320x240.  Again, tracking is one thing, streaming video is another. 

USB 2 technically has enough bandwidth, but the problem resides in the architecture of the Xbox 360 and the fact that there is no dedicated hub for Kinect. Thus, there is no guarantee that in a particular person's circumstances all of the necessary bandwidth would be available, because of whatever else may be connected to the console at the same time. It is much easier simply to say that there isn't enough bandwidth through USB 2 than to try to explain the specific details of the issue.

In order to ensure that under all circumstances the Xbox 360 could support the incoming data in a timely manner, the resolution of the RGB camera was dropped to 320x240. The IR camera is already at 320x240. Again, tracking has little to do with the bandwidth issue, as what comes from the Kinect is data. It's the video stream from the RGB camera that's the issue.

Again, finger tracking has already been demonstrated to work with the existing Xbox 360 and the existing Kinect, through the improvements made to Kinect so far. Whether you want to believe it doesn't make one bit of difference to me; it's done. If you have an Xbox 360 and a Kinect, just fire up Kinect Fun Labs. The finger-tracking capability is featured there.

Also, Kinect does not work 12 feet away.  The ideal range is 6-10'.   At 12' it won't work at all. 

Kinect is improved through software updates; it has been from day one. Anyone involved in the Kinect beta can tell you about the improvements that were sent down and their impact on tracking. And yes, new or refined algorithms can improve tracking. The reports of improvements seen in Fable: The Journey aren't from Microsoft but from independent, third-party journalists.

So while your opinion is appreciated, you clearly do not know what you're talking about. 


Except the whole thing falls apart when you consider there are basic desktop webcams on a USB 2.0 interface that can record at 1920x1080 (well, not the basic ones, but pretty cheap ones; genuine 1920x1080, not interpolated) at 30 fps.

I think you'll find that more than saturates the USB 2.0 bus, so how is that possible? It's possible because processing is occurring in the camera, just as it's occurring in Kinect.

The fact that they took away "one of the processors" doesn't mean that Kinect is a Fisher-Price piece of plastic with a few holes in the front. It was designed specifically for the Xbox 360, so it will have processing suited to the bus.

Regardless, I have no facts to back up my claims, but neither do you or anyone else declaring USB 2.0 bandwidth to be the problem. So I may not know what I'm talking about, as you say, but there are some things I do know, and one is that there's no way on this earth that Kinect is streaming uncompressed video frames down the USB 2.0 connection.

You only need a basic knowledge of computers, and some thought in that area, to "just know" that's not going to be happening. You also don't need to be a rocket scientist to know that the comments from some of the developers on Kinect are clearly watered down for public consumption and have been misinterpreted through the silly, colourful (and incorrect) language reps use.
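The 1080p-webcam comparison above hinges on on-device compression. A sketch of the arithmetic (the ~35 MB/s practical USB 2.0 throughput and the ~15:1 MJPEG ratio are typical assumed figures, not measurements from this thread):

```python
# Raw data rate of a 1080p30 webcam vs. practical USB 2.0 throughput.
uncompressed_mb_s = 1920 * 1080 * 3 * 30 / 1e6   # ~186.6 MB/s raw
usb2_practical_mb_s = 35                         # rough practical USB 2.0 figure

needed_ratio = uncompressed_mb_s / usb2_practical_mb_s  # >5x compression needed
mjpeg_mb_s = uncompressed_mb_s / 15   # ~12.4 MB/s at an assumed ~15:1 MJPEG ratio
```

So a 1080p30 webcam only fits down USB 2.0 because it compresses on-device, which is exactly the point being made: some processing clearly happens inside the camera.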



Adinnieken said:
trasharmdsister12 said:
Adinnieken said:

Except that Microsoft has already stated that it would have been impossible to push that amount of data down.

The issue isn't the skeletal tracking; the issue is the amount of data that streamed video represents when sent down to the console. While that isn't true of every situation, you have to understand that in some circumstances full RGB, IR, and tracking data are all coming down. If you don't need anything but the tracking data, you can improve the fidelity and resolution, and despite what people say, Microsoft is doing that.

They have demonstrated finger tracking in Kinect Fun Labs, and an upcoming game will feature it as well. They have announced, and Fable: The Journey is demonstrating, higher-fidelity motion tracking which should be available this Autumn.

One of the original complaints with Video Kinect, for example, was that its resolution was much worse than the Xbox Vision camera's. The reason is the limited bandwidth, the fact that Video Kinect employed tracking (both limb and head), and the need to stream the RGB video feed.

So no, the second pre-processor would not have added any value. As it stands, Kinect uses less than 1% of the processing power of one core on the Xbox 360. Had there been a bigger pipe to work with (i.e. USB 3), we might have a different argument, in which case I might agree with you.

I'm agreeing with you that transferring the full-res image data from the sensor to the 360 over USB is impossible. What I'm saying is that IF there had been onboard processing within the Kinect, then there wouldn't have been a need for that amount of data transfer to the box.

No doubt MS has done work to improve the tracking algorithms, but what I'm saying is that if there were some form of built-in processing in the Kinect, then full-resolution images could have been used straight from the RGB sensors (instead of the subset resolution they're using right now) for processing right on the Kinect sensor, with no need to even transfer the imaging data to the box, solving the data-throughput problem posed by USB 2.0. This would give more data to the algorithms (or at least to some form of pre-processing algorithms that condense the data), improving tracking without any change to the software. What is then sent to the box is the processed data (be it simple interpretations of the data: gestures, simple skeletal data, or something else that isn't as large as the full images but keeps the full scope of what it represents), to be used by game developers through MS's Kinect development APIs just as they are right now.

The improvements they're making are all from an algorithmic point of view. It's the algorithms they're running on the retrieved images and IR data that are making sensing more accurate. An analogy would be the fuel efficiency of a car. The better the fuel quality (the software), the more efficiently it burns and the higher the fuel efficiency your car will yield. Alternatively, you could build a more efficient engine and a lighter car (the hardware) to improve fuel efficiency, which is what I'm talking about.

You completely missed what I was saying with my initial post. I'm also not knocking nor praising their design decisions. I pointed out both the good and the bad of it and am simply using information to allow others to judge the situation for themselves.

I'm going to explain this differently:

RGB camera: Used for providing RGB video, facial recognition, and mapping RGB images onto IR 3D maps (faces, objects). The RGB camera is capable of 640x480 resolution but was reduced to 320x240 for bandwidth purposes. It has no direct impact on tracking.

IR camera: Used for providing depth and tracking information.  This information is preprocessed on the Kinect and sent down for additional processing.  The IR camera is capable of 320x240.

Microphone array: Used for voice recognition.  This information is sent down for additional processing when activated.

The data from the IR camera is sent down to the Xbox 360 all the time, unless it isn't being used by the game or application. The RGB camera data is only sent down in certain circumstances; however, in those cases there has to be enough bandwidth not only for all the data from Kinect but for everything else on the console.

Additional processing on the Kinect would not have helped unless a higher resolution IR camera was used.

What do you mean by this?



Also I would like to add...

Wikipedia states that the resolution of both cams is 640x480 at 30 Hz,

so that's 60 images per second; a 640x480 uncompressed bitmap is about 900 KB, and 900 KB x 60 is a lot more than 30 MB per second.

Does not compute.

It does not compute because even basic webcams are capable of processing, and no company would be stupid enough to create a device like Kinect that sent raw bitmap data down a USB 2.0 connection. Yet you state that Kinect can lower the RGB camera resolution to 320x240; how is this possible? If the cameras have no processing, how can they process the image digitally and then send it to the Xbox 360?

And if they do have processing, which I say they do, as it's theoretically not possible to stream 640x480 at 60 fps uncompressed down USB 2.0, then they will definitely have processing algorithms to ignore redundant data (pretty much all of the image data collected)...

It's just one big red herring.
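The arithmetic above checks out. A quick verification of the numbers quoted (two 640x480 cameras at 30 fps, 24-bit uncompressed frames assumed):

```python
# Two 640x480 cameras at 30 fps gives 60 uncompressed frames per second.
frame_kib = 640 * 480 * 3 / 1024        # 900 KiB per 24-bit frame
total_mib_s = frame_kib * 60 / 1024     # 60 frames/s -> ~52.7 MiB/s
```

~52.7 MiB/s comfortably exceeds the ~30-35 MB/s practically achievable over USB 2.0, so the conclusion stands: the raw feeds cannot be going down the wire uncompressed.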

 

 

Edit.....

 

 

Hold on a minute, I know Wikipedia isn't ALWAYS right, but it states that the camera operates at 30fps...

If it were operating at that speed, the response time would not be as bad as it is. I'm sure you're right about some of the things you're saying, but your explanation, although far better worded than mine, doesn't really answer the question of why the USB 2.0 bus is saturated.

Wikipedia states that one camera records in greyscale and the other in colour, but both are definitely 30fps. How does that explain the average lag of around 200ms? If Wikipedia is correct, USB 2.0 clearly has nothing to do with it.
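The frame-rate point can be made concrete. If capture alone set the lag, a 30 fps camera would bound it near one frame interval (~33 ms); the ~200 ms figure quoted in this thread is a rough community estimate, not an official number:

```python
# How many frame times does the observed lag span?
FPS = 30
frame_interval_ms = 1000 / FPS        # ~33.3 ms between captured frames
observed_lag_ms = 200                 # rough figure quoted in the thread

frames_of_lag = observed_lag_ms / frame_interval_ms
print(f"Frame interval: {frame_interval_ms:.1f} ms")
print(f"~{frames_of_lag:.1f} frame times of end-to-end lag")
```

Roughly six frame times of lag means most of the latency must come from somewhere other than capture: processing, game logic, and display, not the camera's frame rate.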



My issues with Kinect are:

- Needs too much room.
- Picks up words from my speakers when I use YouTube, BBC iPlayer etc. Very annoying sometimes.
- Like the Wii, I just can't be bothered waving my arms about. I'd much rather sit back and chill with a control pad.
- Killzone 2-esque lag.
- Most of the games are shit.

Some of the issues I've listed won't apply to others, such as the space problem. Despite all of them, though, I hope Kinect 2 can solve the lot. As a poster mentioned earlier, Kinect really needed to be an expensive add-on to work properly, rather than taking the cut-backs MS had to make in order to sell it at a reasonable price.



fillet said:
Adinnieken said:
trasharmdsister12 said:
Adinnieken said:

Except that Microsoft has already stated that it would have been impossible to fit that amount of data down the pipeline.

The issue isn't the skeletal tracking; the issue is the amount of data that streamed video represents when sent down to the console.  While that isn't true of every situation, you have to understand that in some circumstances full RGB, IR, and tracking data are all coming down at once.  If you don't need anything but the tracking data, you can improve its fidelity and resolution, and despite what people say, Microsoft is doing exactly that.

They have demonstrated finger tracking in Kinect Fun Labs, and an upcoming game will feature it as well.  They have announced, and Fable: The Journey is demonstrating, higher-fidelity motion tracking which should be available this Autumn.

One of the original complaints with Video Kinect, for example, was that its resolution was much worse than the Xbox Vision camera's.  The reason is the limited bandwidth: Video Kinect employed tracking (both limb and head) and needed to stream the RGB video feed at the same time.

So no, the second pre-processor would not have added any value.  As it is right now, Kinect uses less than 1% of the processing power of one core on the Xbox 360.  Had there been a bigger pipe to work with (i.e. USB 3), we might have a different argument, in which case I might agree with you.

I'm agreeing with you that transferring the full-res image data from the sensor to the 360 over USB is impossible. What I'm saying is that IF there had been onboard processing within the Kinect, there wouldn't be a need for that amount of data transfer to the box.

No doubt MS has done work to improve the tracking algorithms, but what I'm saying is that if there were some form of built-in processing in the Kinect, full-resolution images could have been used straight from the RGB sensors (instead of the reduced resolution they're using right now) and processed right on the Kinect sensor itself - no need to even transfer the imaging data to the box, solving the throughput problem posed by USB 2.0. This would give the algorithms more data to work with (or at least allow some form of pre-processing to condense the data), improving tracking without any change to the software. What is then sent to the box is the processed data - simple interpretations like gestures or skeletal data, or something else far smaller than the full images but keeping their full scope - to be used by game developers through MS's Kinect development APIs, just as they are right now.

The improvements they're making are all from an algorithmic point of view. It's the algorithms run on the retrieved images and IR data that are making sensing more accurate. An analogy would be the fuel efficiency of a car: the better the fuel quality (the software), the more efficiently it burns and the better mileage the car will yield. Alternatively, you could build a more efficient engine and a lighter car (the hardware), which is what I'm talking about.
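The data-reduction argument above can be made concrete with a rough comparison. The 20-joint skeleton and float32 coordinates below are assumptions about what an on-device tracker might emit, not Kinect's actual wire format:

```python
import struct

# Hypothetical on-device pipeline: instead of streaming whole frames,
# the sensor sends only a packed skeleton per frame.
JOINTS = 20                  # assumption: Kinect-style skeleton
FLOATS_PER_JOINT = 3         # x, y, z position per joint

def skeleton_payload(joints):
    """Pack one frame of (x, y, z) joint positions as little-endian float32."""
    flat = [coord for joint in joints for coord in joint]
    return struct.pack(f"<{len(flat)}f", *flat)

frame_bytes = 640 * 480 * 3                              # raw 24-bit frame
payload = skeleton_payload([(0.0, 0.0, 2.0)] * JOINTS)   # dummy pose

print(f"Raw frame:        {frame_bytes} bytes")
print(f"Skeleton packet:  {len(payload)} bytes")
print(f"Reduction factor: {frame_bytes // len(payload)}x")
```

A 240-byte skeleton packet versus a ~900 KB frame is a reduction of several thousand times, which is the whole point of the poster's argument: with enough on-device processing, USB 2.0 bandwidth stops being the bottleneck.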

You completely missed what I was saying with my initial post. I'm also not knocking nor praising their design decisions. I pointed out both the good and the bad of it and am simply using information to allow others to judge the situation for themselves.

I'm going to explain this differently:

RGB camera:  Used for RGB video, facial recognition, and mapping RGB images onto IR 3D maps (faces, objects).  The RGB camera is capable of 640x480 resolution but was reduced to 320x240 for bandwidth purposes.  It has no direct impact on tracking.

IR camera: Used for providing depth and tracking information.  This information is preprocessed on the Kinect and sent down for additional processing.  The IR camera is capable of 320x240.

Microphone array: Used for voice recognition.  This information is sent down for additional processing when activated.

The data from the IR camera is sent down to the Xbox 360 whenever a game or application is using it.  The RGB camera data is only sent down in certain circumstances; in those cases there has to be enough bandwidth not only for all of Kinect's data but for everything else on the console as well.

Additional processing on the Kinect would not have helped unless a higher resolution IR camera was used.

What do you mean by this?

As I tried to reexplain elsewhere, the issue is in the architecture of the console itself, not in the actual available bandwidth of USB2.

Saying the bandwidth is an issue is the easy explanation; it takes a lot less time.  The real issue is that an Xbox 360 may have multiple peripherals or thumb drives connected, and because those may demand a high degree of bandwidth themselves, the console cannot guarantee enough for full-resolution RGB video.  So to ensure the required bandwidth would be there regardless of circumstances, they reduced the RGB resolution.



trasharmdsister12 said:

Adinnieken said:

RGB camera:  Used for RGB video, facial recognition, and mapping RGB images onto IR 3D maps (faces, objects).  The RGB camera is capable of 640x480 resolution but was reduced to 320x240 for bandwidth purposes.  It has no direct impact on tracking.

Care to explain how it has no direct impact on tracking? I'm open to enhancing my knowledge. If it's because the depth sensor is only 320x240, I deem that a flimsy explanation. Matching the resolution of the two data streams makes for simpler correspondence between the RGB and IR inputs, but it is not a requirement of algorithm design. More data helps tracking. A higher resolution provides more data. Thus, a higher resolution helps tracking.

Finally, why include 2 RGB sensors if it has no hand in spatial tracking?

There is one RGB camera and one IR camera.  The IR emitter (a laser that projects thousands of dots across the room) and the IR camera, which detects those dots, are used for 3D depth and tracking.  The RGB camera is used for video (Video Kinect), facial recognition, and texture-mapping of 3D objects (several Kinect Fun Labs apps use this).
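The dot-projection scheme described above is a form of structured light. The sketch below shows only the generic triangulation idea, not Kinect's actual algorithm, and the focal length and baseline figures are illustrative assumptions:

```python
# Generic structured-light triangulation sketch: a projected IR dot's
# horizontal shift between its expected and observed position encodes depth.
FOCAL_PX = 580.0      # assumption: IR camera focal length, in pixels
BASELINE_M = 0.075    # assumption: emitter-to-camera baseline, in metres

def depth_from_disparity(disparity_px):
    """Depth in metres from the pixel shift (disparity) of a projected dot."""
    if disparity_px <= 0:
        raise ValueError("dot not displaced; depth unresolvable")
    return FOCAL_PX * BASELINE_M / disparity_px

# With these assumed constants, a dot shifted 21.75 px maps to 2 m.
print(f"{depth_from_disparity(21.75):.2f} m")
```

Nearer objects shift the dots more, so disparity falls off with distance; this is also why depth precision degrades at long range, where a large change in depth produces only a sub-pixel change in dot position.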