Mnementh said:
SvennoJ said:


McGuire said that to match the capabilities of human vision, future graphics systems need to be able to process 100,000 megapixels per second, up from the 450 megapixels per second they’re capable of today.


Doing so will help push the vastly higher display resolutions required and push the rendering latency down from current thresholds of about 20 milliseconds towards a goal of latency under one millisecond — approaching the maximum perceptive abilities of humans.

Any added lag from streaming or even a wireless connection would hamper that goal of reaching 1ms latency.

Hehe, that 1ms target puts a real physical limit on it. Light travels at about 300,000 km per second, so it covers roughly 300 km per millisecond. In that time the signal has to get from the sensors to the render machine and the rendered image has to travel back, so the machine can be at most half that distance away: 150 km. And that assumes no time is lost on routing, processing or rendering, and that the signal travels at vacuum speed; in fiber it moves at roughly two thirds of that. So I expect that in the best case the server cannot be more than 50 kilometers away. So, how many servers would you need to achieve that for a country like the USA? I don't think any company can shoulder that amount of investment. So higher lag is to be expected.
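
To put numbers on that, here's a quick back-of-the-envelope sketch in Python. The two-thirds-of-c fiber speed is the usual rule of thumb, and the 0.5ms processing overhead is just an assumed figure:

```python
# Back-of-the-envelope latency budget for cloud-rendered VR.
C_KM_PER_MS = 300.0   # light in vacuum: ~300 km per millisecond
FIBER_FACTOR = 2 / 3  # signals in fiber move at roughly 2/3 of c (rule of thumb)

def max_server_distance_km(budget_ms, overhead_ms, fiber_factor=FIBER_FACTOR):
    """Farthest the server can be for a round trip inside the latency budget."""
    travel_ms = budget_ms - overhead_ms   # time left for the wire itself
    one_way_ms = travel_ms / 2            # the signal goes there and back
    return one_way_ms * C_KM_PER_MS * fiber_factor

# Ideal case: full 1 ms budget, light in vacuum -> 150 km.
print(max_server_distance_km(1.0, 0.0, fiber_factor=1.0))  # 150.0
# In fiber, with an assumed 0.5 ms lost to processing -> 50 km.
print(max_server_distance_km(1.0, 0.5))                    # 50.0
```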

The local hardware can finish or adjust the image, like watching 360 VR videos. There's no lag when turning your head in those, since the whole 360 video is simply sent. That's pretty inefficient, and it can be improved by only sending the part you could possibly see by turning your head within the round-trip lag. Add foveated rendering with enough margin to cover how fast you can move your pupils and you can reduce the bandwidth a lot too.
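
As a rough sketch of what that margin costs: assuming a worst-case head-turn speed of 400 degrees per second (a made-up but plausible figure) and roughly the 150-degree field of view of the eye, the streamed slice only grows slowly with the round-trip lag:

```python
# Rough sketch: how wide a slice of the 360° sphere you'd need to stream
# if you only cover what the head can reach within one round trip.
FOV_DEG = 150            # roughly the human horizontal field of view
HEAD_TURN_DEG_S = 400    # assumed worst-case head rotation speed

def streamed_arc_deg(rtt_ms):
    """Visible FOV plus the margin the head could sweep during the round trip."""
    margin_deg = HEAD_TURN_DEG_S * (rtt_ms / 1000)  # extra degrees per side
    return FOV_DEG + 2 * margin_deg

for rtt_ms in (1, 10, 30):
    arc = streamed_arc_deg(rtt_ms)
    print(f"{rtt_ms:>2} ms round trip -> {arc:.0f}° slice ({arc / 360:.0%} of the full 360°)")
```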

The tricky part is compensating correctly for lateral head movements in 3D. Perhaps you won't really notice it, or you can send depth information with the image for local just-in-time adjustments. That depth information can also be used for depth-of-field corrections with eye tracking. VR streaming will need some local processing to work, but the real work can still be done far away. Head movements are very noticeable in VR, yet pulling a trigger with a 30ms delay doesn't bother anyone.
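
The depth trick comes down to simple parallax: a pixel at depth d shifts on screen by roughly atan(move / d). A minimal sketch (the 110° FOV and 2000px panel width are made-up headset numbers):

```python
import math

def parallax_shift_px(depth_m, head_move_m, fov_deg=110, width_px=2000):
    """Approximate on-screen shift of a pixel for a small sideways head move."""
    angle_rad = math.atan2(head_move_m, depth_m)   # parallax angle
    px_per_rad = width_px / math.radians(fov_deg)  # pixels per radian of view
    return angle_rad * px_per_rad

# 5 cm of head movement: big shift up close, barely anything far away.
print(parallax_shift_px(0.5, 0.05))   # ~104 px for an object at 0.5 m
print(parallax_shift_px(20.0, 0.05))  # ~2.6 px for an object at 20 m
```

Which is why distant scenery reprojects almost for free, while nearby objects are the hard case.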

The infrastructure needs to improve a lot first. I can hardly stream one 4K 360 video at a time, and it looks blurry and only runs at 30fps. 4K at 120 fps per eye is really the minimum for good VR. With 1ms latency you're looking at 1000 fps, and the human eye can benefit from 9K to 18K per eye at its roughly 150-degree field of view. That sounds like a lot, yet with foveated rendering it's doable.
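
For scale, the raw numbers (uncompressed, 24 bits per pixel; the "18K" dimensions and the 5% foveated share are assumed figures, not specs):

```python
# Raw, uncompressed bandwidth for the resolutions in the post.
def raw_gbps(width_px, height_px, fps, eyes=2, bits_per_px=24):
    """Uncompressed video bandwidth in gigabits per second."""
    return width_px * height_px * fps * eyes * bits_per_px / 1e9

full_4k  = raw_gbps(3840, 2160, 120)    # 4K at 120 fps per eye
full_18k = raw_gbps(17280, 9720, 120)   # assumed "18K" (4.5x 4K) per eye

FOVEATED_SHARE = 0.05  # assumed fraction of pixels kept at full detail
print(f"4K/120/eye raw:     {full_4k:7.1f} Gbit/s")
print(f"18K/120/eye raw:    {full_18k:7.1f} Gbit/s")
print(f"18K foveated (~5%): {full_18k * FOVEATED_SHARE:7.1f} Gbit/s")
```

Even before compression, foveation alone brings an 18K-per-eye stream back to roughly the raw cost of plain 4K.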