fleischr said:
HoloDust said:

Sorry, what? Are we looking at the same picture? Because that image is Imagination Technologies' showcase for their PowerVR GPUs, demonstrating the superiority of FP32 over FP16 and the degradation in image quality that the latter produces. Just like this picture as well:

Anyway, FP16 has its place in the mobile industry, but some people have unrealistic expectations, thinking the gains are 2x. IIRC, boosts from using FP16 code in real games are very modest, since most of the work is still done in FP32.

But as I said, I'm no expert on the subject; the best place for discussions like this is probably Beyond3D, and there are threads over there dealing with this matter.

Are these the same number of flops for both images? As in, say, 1 TF in FP16 vs 1 TF in FP32?

The fact that it's listed simply as 'competing multicore GPU' and not anything more specific comes across as dubious.

It's not a matter of flops, it's a matter of precision. Again:

"To get an idea of what a difference in precision 16 bits can make, FP16 can represent 1024 values for each power of 2 between 2-14 and 215 (its exponent range). That’s 30,720 values. Contrast this to FP32, which can represent about 8 million values for each power of 2 between 2-126 and 2127. That’s about 2 billion values—a big difference."

https://devblogs.nvidia.com/parallelforall/mixed-precision-programming-cuda-8/
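You can see that precision gap for yourself in a couple of lines. A quick sketch using NumPy (assuming you have it installed): FP16 has only a 10-bit mantissa, so above 2048 adjacent representable values are 2 apart, and nearby integers collapse to the same number, while FP32 resolves them fine.

```python
import numpy as np

# FP16 (binary16): 10-bit mantissa -> 2**10 = 1024 representable
# values per power of two within its exponent range.
a = np.float16(2048.0)
b = np.float16(2049.0)

# In the range [2048, 4096) the spacing between adjacent FP16
# values is 2, so 2049 rounds to 2048 and the two compare equal.
print(a == b)  # True

# FP32's 23-bit mantissa easily tells them apart.
print(np.float32(2048.0) == np.float32(2049.0))  # False

# Gap to the next representable FP16 value at this magnitude:
print(np.spacing(np.float16(2048.0)))  # 2.0
```

This rounding is exactly the kind of thing that shows up as banding and artefacts when FP16 is used for values with a large dynamic range.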

Of course, all those pictures are Imagination's marketing...probably worst-case scenarios... there's a reason why FP16 can be and is used on mobiles - small screens. But blow that up on a TV and you have a different story. I can't find them now, but I remember pics of HL2 running in FP16 and FP32...lots of artefacts in FP16.

Again, I'm no expert on the matter, not by a long shot, but my understanding is that FP16 is useful in some very limited cases and that performance gains from mixed FP32/FP16 code are quite modest...at least in games.