bonzobanana said:

Real world data is all that matters. You compare the CPU requirements for upscaling from 720p to, let's say, 1080p or 1440p against rendering at the lower resolution without upscaling, to see the CPU difference and how it affects frame rates.

Where is this "real world data"? Where are the benchmarks showing higher CPU utilization for, say, DLSS 720p -> 1080p than for bilinear 720p -> 1080p? So far you haven't provided any. DLSS upscaling does have a performance penalty compared to a brute-force upscale method (usually very minor on high-end GPUs), but that isn't because the CPU is burdened. It's because the tensor cores are being utilized.
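
If anyone wants to generate that data themselves, it's easy enough: run the same benchmark scene once with DLSS and once with a plain bilinear upscale, and log CPU utilization for both passes. A minimal sketch in Python using psutil (the labels, duration, and sampling interval are placeholders, nothing DLSS-specific):

```python
# Sample system-wide CPU utilization while a benchmark scene runs,
# once with DLSS enabled and once with a plain bilinear upscale,
# then compare the two logs.
import time
import statistics
import psutil

def sample_cpu(label: str, duration_s: int = 120, interval_s: float = 0.5) -> None:
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        # cpu_percent blocks for interval_s and returns utilization over that window
        samples.append(psutil.cpu_percent(interval=interval_s))
    p95 = sorted(samples)[int(0.95 * len(samples))]
    print(f"{label}: mean={statistics.mean(samples):.1f}% p95={p95:.1f}%")

# Run the same benchmark scene twice and label each pass accordingly:
# sample_cpu("dlss_720p_to_1080p")
# sample_cpu("bilinear_720p_to_1080p")
```

If the CPU were doing the upscaling work, the DLSS pass would show meaningfully higher utilization at the same frame rate. That's the comparison that would settle it.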

And no, we don't need to just trust Nvidia on this. Convolutional Neural Network (and Vision Transformer, ViT) inference isn't some hidden knowledge. It's well known that after the pre-processing steps (i.e., loading the model into VRAM, which happens once, not per frame) the GPU takes over the inference.
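
As a rough illustration of that "load once, infer per frame" pattern, here's a toy PyTorch sketch (requires a CUDA-capable GPU; this is nothing like DLSS's real network, it just shows where the weights live and where the math runs):

```python
import torch
import torch.nn as nn

device = torch.device("cuda")

# One-time setup: the weights are copied to VRAM here, not per frame.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
).to(device).eval()

@torch.no_grad()
def upscale_pass(frame: torch.Tensor) -> torch.Tensor:
    # The CPU's only role is launching these kernels; the math runs on the GPU.
    return model(frame)

# Per-frame loop: in a real renderer the frame data already lives in VRAM.
frame = torch.randn(1, 3, 720, 1280, device=device)
for _ in range(3):
    out = upscale_pass(frame)
```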

In the case of DLSS, that looks like this: the CUDA cores compute the motion vectors, color data, and other standard rendering passes (performance is saved because the internal pixel count is lower than it would otherwise need to be) -> the tensor cores, using a pre-trained CNN (or ViT) model, take the rendered frame data and run the various convolution, pooling, and ReLU passes (which largely boil down to matrix multiplications and other simple tensor operations) -> the CUDA cores probably do a bit more post-inference work -> the image is sent to the display.
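
A hand-wavy sketch of that per-frame flow, with stub stages standing in for the real render passes (placeholder names, not a real graphics API; the comments mark which hardware would do the work in an actual engine):

```python
import torch
import torch.nn.functional as F

device = torch.device("cuda")

def rasterize_low_res(scene: torch.Tensor) -> torch.Tensor:
    # CUDA cores: geometry + shading at the lower internal resolution (e.g. 720p)
    return torch.rand(1, 3, 720, 1280, device=device)

def compute_motion_vectors(scene: torch.Tensor) -> torch.Tensor:
    # CUDA cores: standard render pass, produced with or without DLSS
    return torch.rand(1, 2, 720, 1280, device=device)

def tensor_core_upscale(color: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
    # Tensor cores: pre-trained CNN/ViT inference (stubbed here with a plain resize)
    return F.interpolate(color, size=(1080, 1920), mode="bilinear")

def post_process(frame: torch.Tensor) -> torch.Tensor:
    # CUDA cores: sharpening, tone mapping, UI composite, etc.
    return frame.clamp(0.0, 1.0)

scene = torch.zeros(1, device=device)  # stand-in for real scene data
color = rasterize_low_res(scene)
motion = compute_motion_vectors(scene)
output = post_process(tensor_core_upscale(color, motion))  # 1080p frame to present
```

The CPU's only contribution in a loop like that is submitting the work each frame, which it has to do anyway.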

Where in this pipeline is the CPU doing anything?