shikamaru317 said:
AMD has their own version of DLSS in the works, as does Microsoft. Both of their versions brute-force the machine learning workloads through 16-, 8-, and 4-bit integer operations, which use far fewer GPU resources than the 32-bit workloads a GPU typically runs. So in other words, instead of relying on dedicated tensor cores to run the machine learning operations, the standard GPU cores will run them while using as few GPU resources as possible.
The main hurdle right now is software and engine support, particularly since the two upscalers may each require different integration work. DLSS, for example, requires motion vectors and material properties to predict optimal upscaling.
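To see why low-bit integer math saves resources, here's a rough NumPy sketch of the core idea: quantize values down to a few signed-integer bits, accumulate the dot product in integers (what DP4a-style shader instructions do in hardware), then rescale. This is just an illustration of the precision trade-off, not how any actual driver or upscaler implements it; the function names are my own.

```python
import numpy as np

def quantize(x, bits=8):
    # Symmetric quantization: map floats onto signed ints of the given width.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def int_dot(a, b, bits=8):
    # Quantize both operands, accumulate in integer arithmetic,
    # then convert back to a float via the two scale factors.
    qa, sa = quantize(a, bits)
    qb, sb = quantize(b, bits)
    acc = int(np.dot(qa, qb))  # pure integer multiply-accumulate
    return acc * sa * sb

rng = np.random.default_rng(0)
a = rng.standard_normal(256).astype(np.float32)
b = rng.standard_normal(256).astype(np.float32)
ref = float(np.dot(a, b))  # full-precision reference

for bits in (16, 8, 4):
    approx = int_dot(a, b, bits)
    print(f"{bits}-bit error: {abs(approx - ref):.4f}")
```

Fewer bits per operand means more values processed per clock on the same ALUs, at the cost of growing quantization error, which is the trade-off these shader-based upscalers are betting on.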