haxxiy said:
sc94597 said:

I personally don't mind Nvidia marketing based on results rather than rasterized workloads. Going forward we're going to see more and more NeRF-assisted graphics. I wouldn't be surprised if, by the end of this decade, rasterization becomes a very small part of game graphics. 

You end up with apples vs. oranges comparisons, however: more latency and residual artifacts in the case of DLSS4 vs. DLSS3, or lower accuracy and stability in the case of FP4 vs. FP8 for deep learning.

I'd like to see something like ultra-performance DLSS3 vs. performance DLSS4 side by side instead, since you'd start with a more or less similar number of rasterized pixels.

For models that work fine at FP4 precision (or with a Mixture of Formats Quantization), it is a real performance increase though. Deep learning isn't scientific computing/simulation/engineering*, where precision is always critical. There are many use cases where saving on floating-point precision still gives good, accurate results, while performance (speed and/or accuracy) increases significantly because you can scale to a higher parameter count than you otherwise could (which is one of the scaling laws of transformers). Often a quantized higher-parameter model performs better than a non-quantized lower-parameter one. If Nvidia is able to take advantage of this for NeRF-assisted workloads (which they already have been), then it also means better performance in games with better results than otherwise. 
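
To put rough numbers on the "quantized bigger model vs. non-quantized smaller model" point, here is a back-of-the-envelope sketch in Python (the parameter counts and the 16 GB budget are made-up illustrations, not any specific Nvidia model): at a fixed memory budget, each halving of the bits per weight roughly doubles the parameter count you can afford, which is where the scaling-law advantage comes in.

# Back-of-the-envelope only: all numbers below are invented for illustration.
def weight_memory_gb(params_billions, bits_per_weight):
    # weight storage only; ignores activations, caches, etc.
    return params_billions * bits_per_weight / 8

budget_gb = 16  # hypothetical VRAM budget reserved for weights
for params_b, bits, fmt in [(8, 16, "FP16"), (16, 8, "FP8"), (32, 4, "FP4")]:
    mem = weight_memory_gb(params_b, bits)
    print(f"{params_b}B params @ {fmt}: ~{mem:.0f} GB of weights (budget: {budget_gb} GB)")

All three configurations land on the same footprint, so the only question is whether the 4-bit weights hurt quality more than the 4x parameter count helps, and for many models they don't.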

What this all does is turn what was purely a hardware problem (Moore's law decelerating) into a hybrid hardware-software problem, where building new DL models can assist the graphics pipeline. Rasterized performance becomes a less useful metric over time in this scenario, because more of the compute is spent on non-rasterized workloads that contribute to the final result. 

Since these DL models are improving at a faster rate than hardware, the end result is that we get much better results for less money than if these companies poured all of their resources into solving the hardware problem alone (assuming there is no hardware paradigm shift on the immediate horizon, like graphene or photonics). 


*except where DL models are used for those fields, in which case it is. 

Edit: This is probably also why Nvidia moved from CNNs to ViTs (Vision Transformers). They scale much better with parameter count and dataset size, allowing Nvidia to better utilize the tensor cores, which have been somewhat underutilized for gaming so far. 
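
For a rough sense of why a transformer block keeps the tensor cores busy, here is an illustrative FLOP breakdown for a single encoder block (the dimensions are placeholders I picked for the example, not the actual DLSS model): essentially every FLOP lands in a dense matrix multiply, which is exactly the operation tensor cores are built to accelerate.

# Illustrative only: made-up dimensions, not any real Nvidia model.
d_model   = 768    # embedding width
n_tokens  = 1024   # number of image patches (sequence length)
mlp_ratio = 4      # hidden expansion factor of the MLP

# Q, K, V and output projections: 4 matmuls of (n_tokens x d_model) @ (d_model x d_model)
proj_flops = 4 * 2 * n_tokens * d_model * d_model
# attention scores (Q @ K^T) and the weighted sum over V
attn_flops = 2 * 2 * n_tokens * n_tokens * d_model
# MLP: d_model -> mlp_ratio * d_model -> d_model, i.e. two more matmuls
mlp_flops = 2 * 2 * n_tokens * d_model * (mlp_ratio * d_model)

total = proj_flops + attn_flops + mlp_flops
for name, flops in [("QKV/output projections", proj_flops),
                    ("attention matmuls", attn_flops),
                    ("MLP", mlp_flops)]:
    print(f"{name:24s} {flops / 1e9:6.2f} GFLOPs ({100 * flops / total:4.1f}%)")

Every line of that breakdown is a dense matmul, and the structure stays the same as the model is scaled up, which is the sense in which ViTs make better use of the tensor cores.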
