fleischr said:
HoloDust said:

I'm not an expert on the subject, but here are a few things to consider:

"To get an idea of what a difference in precision 16 bits can make, FP16 can represent 1024 values for each power of 2 between 2-14 and 215 (its exponent range). That’s 30,720 values. Contrast this to FP32, which can represent about 8 million values for each power of 2 between 2-126 and 2127. That’s about 2 billion values—a big difference."

https://devblogs.nvidia.com/parallelforall/mixed-precision-programming-cuda-8/ 
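To sanity-check the arithmetic in that quote, here is a small sketch in plain Python (my own reconstruction, not from the NVIDIA post): FP16 has a 10-bit significand, so each power of 2 contains 2^10 = 1024 representable values over 30 normal binades; FP32 has a 23-bit significand over 254 normal binades.

def normal_value_count(significand_bits, min_exp, max_exp):
    # distinct significands per power of 2, times the number of binades covered
    values_per_binade = 2 ** significand_bits
    binades = max_exp - min_exp + 1
    return values_per_binade, binades, values_per_binade * binades

print(normal_value_count(10, -14, 15))    # FP16 -> (1024, 30, 30720)
print(normal_value_count(23, -126, 127))  # FP32 -> (8388608, 254, 2130706432), ~2 billion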

 

The current performance king among gaming GPUs, the Titan X:

FP32: 10,157 GFLOPS

FP16: 159 GFLOPS

This is of course nVidia's way of preventing gaming cards from being bought instead of Teslas for tasks that actually benefit from FP16...but it just shows how little FP16 performance matters in games.
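For what it's worth, those two numbers are consistent with consumer Pascal's 1/64 native FP16 rate. A quick back-of-the-envelope check (my own arithmetic, using the published base clock and core count):

cuda_cores = 3584            # Titan X (Pascal)
base_clock_ghz = 1.417
fp32_gflops = cuda_cores * 2 * base_clock_ghz   # 2 FLOPs per core per clock (FMA)
fp16_gflops = fp32_gflops / 64                  # FP16 throttled to 1/64 rate on consumer Pascal
print(round(fp32_gflops), round(fp16_gflops))   # ~10157, ~159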

Honestly, I'm not an expert on the subject, but the last time I recall any talk about FP16 in gaming was some 15 or so years ago. Sure, mobile GPUs have it, but the degradation in quality seems to be quite noticeable.

Can you give an example? FunFan just gave one that seems to prove otherwise.

Sorry, what? Are we looking at the same picture? That image is an Imagination Technologies showcase for their PowerVR GPUs, and it shows the superiority of FP32 over FP16 and the degradation in image quality that the latter produces. Just like this picture as well.
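For a concrete, if crude, illustration of the kind of error behind those comparison shots: FP16 carries only an 11-bit significand, so it cannot even distinguish neighbouring integers above 2048, which is why position or texture-coordinate math done at half precision visibly drifts. A minimal sketch using NumPy's float16 (my example, not from the thread):

import numpy as np

a = np.float16(2048.0)
print(a + np.float16(1.0) == a)     # True: 2049 is not representable in FP16, it rounds back to 2048
print(np.finfo(np.float16).eps)     # ~0.000977, roughly 3 decimal digits of precision
print(np.finfo(np.float32).eps)     # ~1.19e-07, roughly 7 decimal digits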

Anyway, FP16 has its place in the mobile industry, but some people have unrealistic expectations, thinking the gains are 2x. IIRC, the boosts from using FP16 code in real games are very modest, since most of the work is still done in FP32.
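That matches simple Amdahl-style arithmetic: even if FP16 math runs twice as fast, the overall speedup is capped by the fraction of shader work you can actually demote. The fractions below are made-up illustrative numbers, not measurements from any real game:

def frame_speedup(fp16_fraction, fp16_rate=2.0):
    # Amdahl's law: only the demoted fraction of the work runs faster
    return 1.0 / ((1.0 - fp16_fraction) + fp16_fraction / fp16_rate)

for frac in (0.1, 0.3, 0.5):
    print(f"{frac:.0%} of math in FP16 -> {frame_speedup(frac):.2f}x")
# 10% -> 1.05x, 30% -> 1.18x, 50% -> 1.33x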

But as I said, I'm no expert on the subject; the best place for discussions like this is probably Beyond3D, and there are threads over there dealing with this matter.