ArnoldRimmer said:

That's great, yet completely meaningless.

There has never been any doubt that graphics rendering calculations can generally be outsourced to other computers. Raytracing clusters have done this for decades.

The problem is, and has always been, that this has very few practical implications for real-world games.

Because even setting aside the network latency and bandwidth limits of real-world internet connections, there's still the problem that the actual amount of computation doesn't decrease; it just gets distributed differently. If, in this setup, a high-end PC managed to show the scene at about 2 fps while "the cloud" achieved 32 fps, then the cloud computing resources required for this demo amounted to at least 16 equivalent high-end PCs. Those resources cost money, quite a lot of it, and someone has to pay for them.
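
To make that arithmetic explicit, here's a minimal back-of-envelope sketch. The fps figures are the ones from the example above; the hourly price is a purely assumed placeholder, not a quote from any real cloud provider:

```python
# Back-of-envelope sketch of the scaling argument above.
# The fps figures are from the example; the $/hour rate is an
# illustrative assumption, not a real price.

local_fps = 2    # what one high-end PC manages on this scene
cloud_fps = 32   # what the cloud version reportedly achieved

# If rendering cost scales roughly linearly with frame rate,
# the cloud had to use at least this many "high-end PC equivalents":
equivalent_pcs = cloud_fps / local_fps   # = 16

# Hypothetical hourly price for one high-end-PC-class cloud instance:
cost_per_pc_hour = 1.50  # USD, assumed for illustration only

print(f"Equivalent high-end PCs: {equivalent_pcs:.0f}")
print(f"Rough cloud cost per player-hour: ${equivalent_pcs * cost_per_pc_hour:.2f}")
```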

So there's simply a huge gap between theory and practice when it comes to the "power of the cloud" for graphics: meaningful improvements are entirely possible in theory, but unrealistic in practice, because no gamer would currently be willing to pay for the resources required to actually deliver them. Too costly for very limited gains.

You are partly right. The average quality of the game will not exceed what you can get with local hardware.
However, peak demand can be absorbed better by a server farm than by your local hardware. Not everyone plays at the same time, and most games don't demand that much all the time. Plus, in a multiplayer game, the world and physics only need to be simulated once for all the clients. So when you play a streamed game you get about the same level of graphics as you would locally, but instead of heavy explosions being restricted to cutscenes, they can now play out in real time.
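
Here's a tiny toy simulation of that statistical-multiplexing argument. The workload model (rare random spikes on top of a baseline) and all the numbers are made up purely to show the shape of the effect, not measured from any real game:

```python
# Toy illustration of "peak demand is absorbed by the farm".
# The spike probability and load values are assumptions for illustration.

import random

random.seed(42)

PLAYERS = 1000
TICKS = 1_000

# Per-player GPU demand per tick: baseline load most of the time,
# with occasional spikes (the "heavy explosion" moments).
def demand():
    return 10.0 if random.random() < 0.05 else 1.0  # arbitrary units

per_player_peak = 10.0  # local hardware must be sized for this worst case

farm_totals = []
for _ in range(TICKS):
    farm_totals.append(sum(demand() for _ in range(PLAYERS)))

farm_capacity_per_player = max(farm_totals) / PLAYERS

print(f"Local hardware sized per player: {per_player_peak:.1f} units")
print(f"Farm capacity needed per player: {farm_capacity_per_player:.2f} units")
# Because spikes rarely hit all players at once, the shared farm needs far
# less capacity per player than each local machine sized for its own peak.
```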