
Because all engines are so different, it would be very difficult to come up with a fair metric.

Take shadows. It's practically impossible to measure shadow fidelity unless it's obviously bad looking, or the devs let us in on it, because shadow mapping is a texture-space operation. Because the shadow map is projected across geometry at angles that typically don't match the viewer's, you really can't pixel-count shit.
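To make that concrete, here's a rough back-of-the-envelope sketch (my own illustration, not anything from a real engine; the resolution and frustum numbers are made up) of why there's no fixed shadow-texel-to-screen-pixel ratio you could count:

```python
import numpy as np

# Hypothetical numbers, just for illustration.
SHADOW_MAP_RES = 2048          # assumed shadow map resolution
LIGHT_FRUSTUM_WIDTH = 100.0    # assumed ortho light frustum width, in meters

# World-space size of one texel for a surface facing the light head-on.
texel_size = LIGHT_FRUSTUM_WIDTH / SHADOW_MAP_RES

for tilt_deg in (0, 30, 60, 80):
    # As the receiving surface tilts away from the light, the footprint of a
    # single shadow-map texel stretches by roughly 1/cos(tilt), so how many
    # screen pixels it ends up covering depends on both the light angle and
    # the camera angle -- there's no constant ratio to measure.
    stretch = 1.0 / np.cos(np.radians(tilt_deg))
    print(f"surface tilted {tilt_deg:2d} deg from the light: "
          f"texel footprint ~{texel_size * stretch:.3f} m")
```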

I just think of all the effects available... What about depth of field? Some devs might be using 4 depth samples, others 8. Some games I've seen appear to use a nasty box filter for DOF, others must be using a nicer Gaussian filter (it's so hard to tell).
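A quick toy comparison of the two filters (again my own illustration, nothing to do with any particular game's DOF): blur a hard edge with a box kernel and with a Gaussian kernel of the same radius, and you can see why the box version tends to read as blocky.

```python
import numpy as np

RADIUS = 4
edge = np.concatenate([np.zeros(16), np.ones(16)])   # a hard intensity edge

# Box kernel: every tap weighted equally.
box = np.ones(2 * RADIUS + 1)
box /= box.sum()

# Gaussian kernel of the same radius: taps fall off smoothly from the center.
x = np.arange(-RADIUS, RADIUS + 1)
gauss = np.exp(-(x ** 2) / (2 * (RADIUS / 2) ** 2))
gauss /= gauss.sum()

box_blur = np.convolve(edge, box, mode="same")
gauss_blur = np.convolve(edge, gauss, mode="same")

# The box filter spreads the edge in flat, abrupt steps; the Gaussian gives a
# smooth falloff.
print("box:     ", np.round(box_blur[12:20], 2))
print("gaussian:", np.round(gauss_blur[12:20], 2))
```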

Another example I can think of is luminance extraction for dynamic exposure. I can guarantee you that the initial luminance extraction of the scene is done at half resolution in 99% of games with HDR implementations. Why? It saves a ton of operations, and it's more than good enough as a starting point for calculating the average luminance.
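Here's a minimal sketch of that idea, assuming a linear-HDR RGB frame as a numpy array, Rec. 709 luma weights, and a Reinhard-style log-average luminance (the function names and test frame are mine, not any engine's code). Dropping to half resolution first quarters the pixel count, and the average barely moves:

```python
import numpy as np

def log_average_luminance(rgb, eps=1e-4):
    # Per-pixel luminance from linear RGB, then the geometric mean
    # (log-average), which is the usual starting point for auto-exposure.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return np.exp(np.mean(np.log(lum + eps)))

def half_res(rgb):
    # Simple 2x2 box downsample: one quarter of the pixels for later passes.
    return 0.25 * (rgb[0::2, 0::2] + rgb[1::2, 0::2] +
                   rgb[0::2, 1::2] + rgb[1::2, 1::2])

frame = np.random.rand(720, 1280, 3) ** 2.2   # fake HDR-ish test frame
print("full res :", log_average_luminance(frame))
print("half res :", log_average_luminance(half_res(frame)))
```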

Could you imagine people screaming "X game is only calculating scene luminance at 360p!!!"? It's nuts.

Now geometry is measured for fidelity because, well, it's an easy target really. Engines are all about making tradeoffs to get what you want. If you want to build a world with high-res sunlit shadows, a big view distance, and real-time reflections, then why should you have to do that at 720p? You don't have to, so the devs don't.