Soleron said:

Basically, multi-GPU setups can produce more frames, but the individual frame times come out like this: 10ms, 10ms, 10ms, 50ms, 10ms, 10ms ...

And that gives a worse experience to the viewer than the following: 12ms, 12ms, 12ms, 12ms, 12ms, 12ms. Even if the first card reports an FPS number that looks just as good, the second card will look smoother to a human because its frame times are consistent.

So the 99th-percentile measure shows the time within which you can expect 99% of frames to be rendered, rather than the average. For smooth frame rates, it works out to the same thing as raw FPS. But when the frame times look more like the first situation above, it's a better measure of the actual experience.

Video of actual cards: http://techreport.com/review/24051/geforce-versus-radeon-captured-on-high-speed-video

Full explanation (long): http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking

More testing on it: http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Tes-12


Wow, that was a complete answer, and the links and explanation were great. I had never thought about frame latency problems, but it looks like they are actually an issue. I'm not surprised that NVidia GPUs perform better on that front than AMD ones, since the NVidia driver is much more mature (they actually share more than 80% of the code between the Windows, Mac and Linux versions, which minimizes maintenance effort and keeps performance closer across operating systems).
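
To check my own understanding of the 99th-percentile idea, I put the two example frame-time sequences from the quote into a quick Python sketch. This is just an illustration of the metric itself, not how TechReport or PC Perspective actually capture frames, and the sequence names are mine:

```python
import math

# Frame times in milliseconds, taken from the two example sequences in the quote.
spiky = [10, 10, 10, 50, 10, 10]    # occasional long frame (the multi-GPU case)
steady = [12, 12, 12, 12, 12, 12]   # perfectly consistent pacing

def percentile(values, pct):
    """Nearest-rank percentile: smallest value that at least pct% of samples fall at or under."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def summarize(name, frame_times_ms):
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    print(f"{name:>6}: average {avg_ms:.1f} ms/frame "
          f"({1000.0 / avg_ms:.0f} FPS), "
          f"99th-percentile frame time {percentile(frame_times_ms, 99):.1f} ms")

summarize("spiky", spiky)
summarize("steady", steady)
#  spiky: average 16.7 ms/frame (60 FPS), 99th-percentile frame time 50.0 ms
# steady: average 12.0 ms/frame (83 FPS), 99th-percentile frame time 12.0 ms
```

With only six frames the 99th percentile is basically the worst frame, and the one spike also drags the average down noticeably. Over a real benchmark run with thousands of frames, a handful of 50 ms spikes would barely move the average FPS but would still show up clearly in the 99th-percentile number, which is the whole point of the metric.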