Adinnieken said:
TCP/IP ensures packets are correctly received. Hashing is used to ensure that the content received is the same as the content sent (a checksum). It is not error correction. You didn't watch the video or listen to it. It's sent back down as a video stream. The video stream is then processed on the local system. The point of the video was to demonstrate the quality at a given latency. Even at 1000ms latency (30 FPS) it actually offered decent quality. That said, the highest average latency in the US is 70ms, while the lowest average is 12ms. Theoretically one could do lighting rendering with 100ms latency and still offer a high-quality image, especially if, as with the Xbox One, you have a separate video stream processor that allows you to interpolate frames to increase the frame rate.
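(Quick aside on the checksum point before I reply: a hash only detects that the received bytes don't match what was sent, it can't repair anything. A minimal Python sketch of the idea, with SHA-256 picked purely as an example:

import hashlib

def content_hash(data: bytes) -> str:
    # Digest of the payload; the sender transmits this alongside the content.
    return hashlib.sha256(data).hexdigest()

def verify(received: bytes, expected_digest: str) -> bool:
    # Detection only: a mismatch tells you the content is wrong,
    # but gives you no way to reconstruct the original (no error correction).
    return content_hash(received) == expected_digest

payload = b"rendered lightmap chunk"
digest = content_hash(payload)
assert verify(payload, digest)                  # intact copy passes
assert not verify(b"corrupted bytes", digest)   # altered copy merely fails

Detection versus correction is the whole difference, which is why a checksum alone doesn't buy you anything for lossy or late data.)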
For CPU loads, computation is done in batch jobs, which is the route Microsoft will take. Processing and outputting video puts too much strain on the network infrastructure to be used effectively outside of lab experimentation. Developing engines to work with lightmap muxing and then processing that live video stream uses roughly 50% of the power you saved by having an external server render it for you instead of just rendering it locally, and again, those are lab tests. In the real world, gaming environments change constantly; once you include the unpredictable nature of online worlds and multiplayer servers, processing lighting externally with any degree of latency will result in very choppy lighting, because the environment changes dramatically before the data has had time to be processed and received.
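To put rough numbers on that staleness problem, here's a back-of-the-envelope sketch. Only the 70ms round-trip figure comes from the post above; the server compute and local apply times are made-up placeholders:

# Rough latency budget for remotely computed lighting (hypothetical numbers).
FRAME_BUDGET_MS = 1000.0 / 30          # ~33 ms per frame at 30 FPS

def remote_lighting_age_ms(rtt_ms, server_compute_ms, local_apply_ms):
    # By the time the lighting data arrives and is applied, it describes a
    # scene state that is at least this many milliseconds old.
    return rtt_ms + server_compute_ms + local_apply_ms

age = remote_lighting_age_ms(rtt_ms=70, server_compute_ms=20, local_apply_ms=5)
print(f"lighting data is ~{age:.0f} ms ({age / FRAME_BUDGET_MS:.1f} frames) stale")

Even with a "good" connection, the result you get back describes a scene that is several frames old, and in a fast-moving multiplayer environment it may no longer match what's on screen.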
Additionally, the latency involved in sending a job back and forth and then processing the result locally is barely any different from having to process a video stream. The only difference is that you're saving a fraction of the time and bandwidth in exchange for sacrificing overall image quality, since compressing the video stream results in a loss of clarity.
Of course, you could stream uncompressed video, but then you're looking at insane latency and bandwidth usage.
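For a sense of what "insane bandwidth" means, here's the quick arithmetic for raw 24-bit 1080p60 with no compression assumed:

# Back-of-the-envelope bandwidth for an uncompressed 1080p60 video stream.
width, height = 1920, 1080
bytes_per_pixel = 3          # 24-bit colour, no alpha
fps = 60

bytes_per_second = width * height * bytes_per_pixel * fps
gbps = bytes_per_second * 8 / 1e9
print(f"~{gbps:.1f} Gbit/s")   # roughly 3 Gbit/s, far beyond a home connection

That's around 3 Gbit/s sustained, per client, before you even touch latency.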
Microsoft will of course say all of this is possible, and indeed it is, because they want the feature to be a good talking point, regardless of whether actually implementing it makes any sort of sense. The reality is that the cloud is going to be used as a glorified perpetual storage system for profile and NPC data.
Of course, cloud processing of video will end up getting used eventually, but it's going to be like "1080p60" on the PS3 and 360: barely anything actually uses it, but because a couple of titles do, people act like it's a super important feature.
Turn 10 are basically using the cloud as a perpetual storage system: a player's profile data is pulled from the cloud by the client machine when they join a race, and the result and overall progress of that race are then uploaded and added to that profile data, ready to be used by the next client, or by the profile owner when they next sign in.
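In other words, the flow looks roughly like this. CloudStore, fetch_profile and save_profile are invented names for illustration, not anything from Turn 10's actual code:

# Hypothetical sketch of "cloud as perpetual profile storage".
class CloudStore:
    # Stand-in for the remote store: just a dict in memory.
    def __init__(self):
        self._profiles = {}
    def fetch_profile(self, pid):
        return self._profiles.setdefault(pid, {"id": pid})
    def save_profile(self, pid, profile):
        self._profiles[pid] = profile

def join_race(cloud, opponent_ids):
    # Client pulls opponent profile data down from the cloud when a race starts.
    return [cloud.fetch_profile(pid) for pid in opponent_ids]

def finish_race(cloud, profile, race_result):
    # Race outcome is merged into the profile and pushed back up, ready for the
    # next client (or the profile's owner) to pull down later.
    profile["races_completed"] = profile.get("races_completed", 0) + 1
    profile["last_result"] = race_result
    cloud.save_profile(profile["id"], profile)

cloud = CloudStore()
opponents = join_race(cloud, ["profile_a", "profile_b"])
finish_race(cloud, opponents[0], race_result="2nd place")

Note there's no real-time computation happening server-side in that picture; it's fetch, race locally, upload.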
Of course, if you can point to a single instance of a game on the Xbox One that will be using remote deferred computation, I would be more than happy to debate this further. As of right now, though, you are cherry-picking alpha development demonstration concepts from a manufacturer that has nothing to do with either console and trying to apply them to the Xbox One, and it just isn't working.







