HoloDust said: "Irides also does this with a technique called “likelihood-based foveation” that renders higher-quality parts of an image and ‘gracefully degrades’ others, all based on where the user is likely to look. It also uses a spherical mesh for said image. In a sense, this sort of mimics the human eye, and offers the user lower latency and higher quality at the same time." This is the interesting part. I remember back in the 80s reading about a certain, IIRC Air Force, combat training simulator doing something similar: just rendering at higher quality whatever you're directly looking at. I made a thread about it a couple of years back wondering if this is a viable approach for games and TVs; I think we came to the conclusion that a TV might be too small for it, but VR seems like a perfect candidate.
It will be a breakthrough in rendering for VR. Besides the fact that you only need to render about 2 degrees of your 100-degree FOV at the highest resolution (since area scales with the square of the angle, that's roughly (2/100)² ≈ 0.04% of the image at full detail), more importantly it also opens up rendering depth correctly and fixing the convergence-accommodation conflict inherent in all current stereoscopic display solutions.
http://vbn.aau.dk/ws/files/71812895/Paper.pdf
There have been some tests with the Oculus Rift, but eye tracking is not reliable and fast enough yet.
http://doc-ok.org/?p=1021
The most important result is this: the current system latency is too high for one of the more forward-looking applications of eye tracking, foveated rendering. I clearly saw the tracking circle lagging behind my eye movements, and I would have noticed reduced rendering resolutions and “pop” from the software switching between levels of detail just as clearly.
To give a ballpark estimate: assume that a foveated rendering system uses a full-resolution area of 9° around the current viewing direction (that’s very small compared to the Rift’s 100° field of view). Then, assuming a 900°/s saccading speed, the display system must have a total end-to-end latency, including eye tracking and display refresh, of less than 1/100s, or 10ms. If latency is higher, the user’s eyes will be able to “outrun” the full-resolution zone, and see not only a low-resolution render, but also a distinct “popping” effect when the display system catches up and renders the now-foveated part of the scene at full detail. Both of these effects are very annoying, as we have learned from our out-of-core, multi-resolution 3D point cloud and global topography viewers (which use a form of foveated rendering to display terabyte-sized data sets at VR frame rates on regular computers). As mentioned above, SMI’s eye tracked Rift prototype does not yet have low enough latency to make foveated rendering effective.
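The latency budget from that quote is easy to sanity-check yourself. Here's a minimal sketch using the article's numbers (9° full-resolution zone, 900°/s saccade speed); the individual pipeline stage timings are purely illustrative assumptions, not measurements from any real headset:

```python
# Back-of-the-envelope latency budget for foveated rendering,
# using the numbers from the quoted article.

FOVEA_RADIUS_DEG = 9.0       # full-resolution zone around gaze direction
SACCADE_SPEED_DEG_S = 900.0  # peak saccade speed of the human eye

# Maximum end-to-end latency before the eye can "outrun" the
# full-resolution zone during a saccade.
max_latency_s = FOVEA_RADIUS_DEG / SACCADE_SPEED_DEG_S
print(f"latency budget: {max_latency_s * 1000:.0f} ms")  # -> 10 ms

# Hypothetical pipeline stages (assumed values) to show how
# quickly that budget gets eaten:
stages_ms = {
    "eye-tracker exposure + processing": 4.0,
    "render frame": 5.0,
    "display scan-out": 7.0,  # one refresh at ~144 Hz is ~7 ms
}
total_ms = sum(stages_ms.values())
print(f"pipeline total: {total_ms:.0f} ms "
      f"({'within' if total_ms <= max_latency_s * 1000 else 'over'} budget)")
```

With those assumed stage timings the pipeline comes out at 16 ms, over the 10 ms budget, which is exactly the failure mode the article describes: the eye lands before the full-resolution zone does, and you see the pop.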
It's promising stuff anyway. When fast and reliable eye tracking becomes available, foveated VR headset rendering will only cost a fraction of rendering a full 4K TV image, while giving the illusion of looking at a 4K, 100-degree image field. (You still need 8K panels, even though you only render at low resolution on most of them.)
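To put a rough number on that "fraction": here's a sketch of the pixel budget. The panel size, fovea inset size, and peripheral downsampling factor are all illustrative assumptions, not specs from any real device:

```python
# Rough pixel-budget sketch for foveated rendering. All the
# constants below are illustrative assumptions.

PANEL_W, PANEL_H = 7680, 4320   # "8K" panel per eye (assumed)
FOV_DEG = 100.0                 # field of view
FOVEA_DEG = 5.0                 # full-res inset; generous vs the ~2 deg fovea
PERIPHERY_SCALE = 0.25          # periphery rendered at 1/4 linear resolution

pixels_full = PANEL_W * PANEL_H

# Full-resolution inset: small-angle approximation, so the angular
# fraction of the FOV maps directly to a fraction of the panel.
inset_w = PANEL_W * FOVEA_DEG / FOV_DEG
inset_h = PANEL_H * FOVEA_DEG / FOV_DEG
pixels_inset = inset_w * inset_h

# Periphery rendered at reduced resolution, then upscaled for display.
pixels_periphery = pixels_full * PERIPHERY_SCALE ** 2

pixels_foveated = pixels_inset + pixels_periphery
print(f"full render:     {pixels_full / 1e6:6.1f} Mpx")
print(f"foveated render: {pixels_foveated / 1e6:6.1f} Mpx "
      f"({pixels_foveated / pixels_full:.1%} of full)")
```

Under those assumptions you render about 2.2 Mpx instead of 33 Mpx, roughly 6-7% of the full 8K frame, and indeed less than a single 4K image (about 8.3 Mpx), while the display itself still drives the full 8K panel.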