Michelasso said:
Let's be serious, we argue about the GDDR5's 176GB/s vs eSRAM's 204GB/s bandwidths and then some expect computations sent through a channel with a 500KB/s-100MB/s bandwidth and a latency of 10-300ms to improve the frame rate? The demo MS showed was made on a PC connected to a computer farm via either Ethernet or fiber optics, where the latency is negligible for that purpose and the bandwidth is at least 1Gbps. It's an interesting use of an old technology and it isn't much different from the Uncharted 2 cutscenes rendered on a farm of PS3s.
You're comparing two completely different types of computational processes. Memory calculations happen extremely quickly on local hardware, and that information would NOT be sent across the wire (in the same way that, today, it is NOT sent over an HDMI cable attached to your television). Local computations run on the hardware; once processed, the results are passed to the output medium. Example:
Code (written by developers) > Calculations (depending on variables such as player movement, calculations are required) > Processor (the calculations are processed) > Output (the result of these calculations is passed to your screen).
Neither the code nor the calculations ever reach the output. They are processed by the hardware and the results are forwarded to the output device.
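To make the pipeline concrete, here's a toy sketch of that flow (all names are made up for illustration, this is obviously not real engine code): only the *result* of the calculation ever reaches the output stage.

```python
# Toy model of: Code > Calculations > Processor > Output.
# Hypothetical names; illustrates that only results reach the screen.

def handle_frame(player_input):
    # Calculations depend on variables such as player movement
    new_position = player_input["x"] + player_input["dx"]
    # The processed result (not the code or intermediate math) goes to output
    return render(new_position)

def render(position):
    # Stand-in for the output stage (your screen)
    return f"frame drawn at x={position}"

print(handle_frame({"x": 10, "dx": 2}))  # frame drawn at x=12
```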
It would work exactly the same with a server. The server processes the calculations required for physics/light/AI etc. and then the output is passed to the end user via the network (internet). The only difference between the technology used by game streaming and Crackdown's tech is that instead of passing the raw output (a video stream), the servers are passing a partially computed result to the end user device for further processing.
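That "partially computed result" split can be sketched like this (entirely hypothetical numbers and function names, just to show the division of labour): the server does the heavy maths, the console applies the cheap, latency-sensitive finishing touches.

```python
# Hypothetical split between datacenter and console.
# The server returns a partial result, not a finished video frame.

def server_compute_debris(explosion_force):
    # Heavy lifting in the datacenter: coarse debris trajectories
    return [explosion_force * 0.5 * t for t in range(3)]

def client_finalize(partial_result, local_offset):
    # The console finishes the job locally before rendering
    return [p + local_offset for p in partial_result]

trajectories = client_finalize(server_compute_debris(10.0), local_offset=1.0)
print(trajectories)  # [1.0, 6.0, 11.0]
```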
With regards to 'can this tech improve frame rate?', of course it can. It cannot be used to improve the frame rate of existing games because they simply aren't designed with this technology in mind. New games, however, can take advantage of it. Crackdown is the perfect example. If it wasn't for the power of the servers running the physics calculations, the frame rate on the Xbox would grind to a halt. Instead, it is able to run more fluidly because some of the calculations are offloaded to a separate device (the servers). Physics is just one area this technology can be used in. As other companies have demonstrated (including NVIDIA), it can be used to calculate some lighting information, physics, AI etc.
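The reason the 10-300ms latency doesn't kill the frame rate is that the offloaded work has to be latency-tolerant: the console fires off the request and keeps rendering, folding the result in whenever it arrives. A rough sketch of that idea (a thread pool standing in for the cloud, with a sleep standing in for network latency):

```python
# Sketch of latency-tolerant offloading: the "cloud" call runs in the
# background while local rendering continues uninterrupted.
from concurrent.futures import ThreadPoolExecutor
import time

def cloud_physics(job):
    time.sleep(0.05)  # stand-in for the 10-300ms round trip
    return job * 2    # stand-in for an expensive physics result

with ThreadPoolExecutor() as pool:
    future = pool.submit(cloud_physics, 21)
    frames_rendered = 0
    while not future.done():
        frames_rendered += 1  # local rendering never stalls on the wire
    result = future.result()  # integrated whenever it lands
```

This only works for computations that can tolerate arriving a few frames late (destruction debris, ambient lighting, background AI), which is exactly why it can't retrofit a frame-rate boost onto a game that wasn't built around it.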
The real question is not CAN it be done, but WILL it be done. Most developers simply will not entertain the idea of using cloud computing for a long time due to:
- Development budget - it's new tech, and it will cost more to learn and develop for.
- Audience - not everyone has a good enough internet connection and currently, Microsoft is the only platform holder with this technology so what 3rd party in their right mind would limit themselves?
- Risk - it's new, the tech is uncertain, its capabilities are uncertain, and publishers are never keen to take risks when they already have a working formula.