JaggedSac said:
letsdance said:
JaggedSac said:

Well, Kudo stated in an interview that they are going to be reducing the input lag before releasing the product.  The important thing to glean from this is that it will not get worse.  The input lag will only get better than what is in that video.  Their neural network and firmware will be improved.  The accuracy will stay the same since the CMOS sensor is the same.  The only question is how close that camera was.


That was before this though.

http://www.gamesindustry.biz/articles/microsoft-drops-internal-natal-chip_1

I don't know how much of an effect it will have, but I know people were talking up the chip in Natal as a reason why the PS Eye will never be as accurate ("the input is already processed by the time it gets to the console", or something along those lines).

That will only affect games that are developed for Natal, i.e. AI, physics, etc.  It will not affect accuracy in any capacity, and the input lag will most likely be improved, since the 360 CPU is more powerful than a standalone chip in Natal would be.

The reason the chip was good was that developers could develop games as though Natal were just another controller.  They would not have to worry about it eating up resources.  Now they do.

Having the software on the 360 can be a good thing.  Individual games can ship their own game-specific Natal software that the developers tune themselves.  It is a much more open platform now.

That's not necessarily true.  A chip dedicated to processing the information the camera captures could very well be better than using 15-30% of the Xbox 360's CPU power.  The chip would be designed for exactly what needs to happen to process the data.  CPUs are generalists, designed to be useful in almost any circumstance.  A custom-designed chip would probably be crap at doing most things, but excel at what it's supposed to do (process Natal data).
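To put that 15-30% figure in rough perspective, here's a back-of-the-envelope sketch.  The 30 Hz tracking rate is my assumption, not anything Microsoft has confirmed; the 3 cores at 3.2 GHz is just the published Xenon spec.

```python
# Rough sketch: what reserving 15-30% of the Xbox 360's CPU for Natal could
# mean per frame. 30 Hz tracking rate is an assumption; 3 cores is the
# published Xenon spec.
CORES = 3
FRAME_RATE_HZ = 30
FRAME_TIME_MS = 1000.0 / FRAME_RATE_HZ  # ~33.3 ms per frame

for reserved in (0.15, 0.30):
    # Total CPU time (summed across all cores) given up during each frame.
    lost_ms = reserved * CORES * FRAME_TIME_MS
    print(f"{reserved:.0%} reservation ~= {lost_ms:.1f} ms of CPU time per frame "
          f"({reserved * CORES:.2f} core-equivalents)")
```

In other words, at the high end of that range the games lose close to a full core's worth of time every frame, which is exactly the resource hit developers were supposed to be shielded from.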

An easy example is a GPU with an H.264 decoder versus one without.  If your GPU has an H.264 decoder, your CPU utilization when watching an HD video will probably be relatively light, even on a weaker CPU.  If your GPU doesn't have an H.264 decoder, well, I hope you have a powerful CPU.  The NVIDIA Tegra can handle 1080p H.264 decoding that an older desktop CPU would have a tough time with, yet in terms of pure power the older desktop CPU is probably still more powerful than the Tegra.
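If anyone wants to see that gap for themselves on a PC, one quick (and admittedly rough) way is to decode the same clip with and without hardware acceleration using ffmpeg and compare the times or watch CPU usage.  The file name below is a placeholder, and this is just an illustration of the general point, not anything Natal-specific.

```python
import subprocess
import time

# Decode the same clip twice with ffmpeg: once letting a GPU decoder help
# (-hwaccel auto), once forcing plain software decode on the CPU. Comparing
# wall-clock time (or watching CPU usage) shows the gap described above.
# "sample_1080p.mp4" is a placeholder file name.
def timed_decode(extra_args):
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-v", "error", *extra_args,
         "-i", "sample_1080p.mp4", "-f", "null", "-"],
        check=True,
    )
    return time.time() - start

hw_seconds = timed_decode(["-hwaccel", "auto"])  # GPU-assisted where available
sw_seconds = timed_decode([])                    # pure CPU decode
print(f"hardware-assisted: {hw_seconds:.1f}s, software only: {sw_seconds:.1f}s")
```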

EDIT:  On a side note, it was the cheaper, less powerful GPUs from NVIDIA that were the first to include an H.264 decoder, while the more powerful GPUs went without one.