Zkuq said:
FunFan said:

It seems so, but 32-bit has been the standard for a long time, so it's probably strongly established in devs' mindsets. Only time will tell if there's any advantage to 16-bit floats, if developers even take the time to experiment with them.

I don't think it's just that we've been using FP32 for so long. To give you an idea of how imprecise FP16 is: its largest finite value is 65504, and it can only represent every integer exactly up to 2048; above that, gaps start appearing between representable values. It gets worse with fractional numbers: past 1024 the spacing between representable values is already 1, so you can't represent a fractional part at all anymore because FP16 simply lacks the precision for that. Obviously this compounds when you chain operations together. Wikipedia has some more details on the precision of FP16. This is actually a good reminder for me that the further you move away from 0 with floating-point numbers, the less precision you have.
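For anyone who wants to see this first-hand, here's a quick sketch using NumPy's float16 (assuming NumPy is installed; any IEEE 754 half-precision implementation behaves the same way):

```python
import numpy as np

# Largest finite FP16 value (everything above rounds to this or to infinity)
print(np.finfo(np.float16).max)   # 65504.0

# Integers are only contiguously exact up to 2048; above that, spacing is 2, then 4, ...
print(np.float16(2049))           # 2048.0 -- rounds, since the gap here is 2

# Above 1024 the spacing is already 1, so fractional parts are lost
print(np.float16(1024.5))         # 1024.0

# The same effect breaks accumulation: adding 1 stops changing the sum at 2048
x = np.float16(0)
for _ in range(3000):
    x += np.float16(1)
print(x)                          # 2048.0, not 3000.0
```

The rounding mode here is IEEE round-to-nearest-even, which is why 2049 and 1024.5 both land on their even neighbor.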

I already know that, and I'm still sticking to what I said: "Only time will tell if there's any advantage to 16-bit floats, if developers even take the time to experiment with them."



“Simple minds have always confused great honesty with great rudeness.” - Sherlock Holmes, Elementary (2013).

"Did you guys expected some actual rational fact-based reasoning? ...you should already know I'm all about BS and fraudulence." - FunFan, VGchartz (2016)