Just want to point out that the expensive "AI" part of DLSS doesn't happen on the gaming platform, but on the supercomputer that trains the model. The autoencoder ships pre-trained.
The comparatively small inference computation that runs on your GPU is really no different from any other matrix multiplication.
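As a toy illustration (layer sizes and weights are invented here, and this has nothing to do with Nvidia's actual network), the runtime side boils down to something like:

```python
import numpy as np

# Toy stand-in for a pre-trained autoencoder's forward pass. Sizes and
# weights are invented for illustration; the real DLSS network is a
# convolutional autoencoder whose weights ship with the driver.
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((256, 64))   # "pre-trained" encoder weights
W_dec = rng.standard_normal((64, 256))   # "pre-trained" decoder weights

def relu(x):
    return np.maximum(x, 0.0)

def inference(x):
    # The entire runtime cost is two matrix multiplications plus a cheap
    # elementwise activation -- no training, no gradients.
    latent = relu(x @ W_enc)
    return latent @ W_dec

frame_features = rng.standard_normal((1, 256))  # stand-in for per-pixel inputs
output = inference(frame_features)
print(output.shape)  # (1, 256)
```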
A small number of tensor cores, along with the RT cores, perform this computation.
There is an argument to be made that it is mainly the RT cores and not the tensor cores: a 4090 shows the same proportional frame-time penalty from DLSS as a 2060, despite having many more, and more efficient, tensor cores.
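A rough back-of-envelope version of that argument (the throughput figures are approximate public numbers, and the per-frame workload is a made-up placeholder):

```python
# Approximate dense FP16 tensor throughput, in TFLOPS; both are rough
# public figures, used only to illustrate the scaling argument.
tensor_tflops = {"RTX 2060": 52, "RTX 4090": 330}

# Hypothetical fixed DLSS inference cost per frame, in TFLOPs of work.
work_per_frame = 0.05

for gpu, tflops in tensor_tflops.items():
    ms = work_per_frame / tflops * 1000
    print(f"{gpu}: ~{ms:.2f} ms/frame if purely tensor-bound")
```

If the cost were purely tensor-bound, the 4090 should pay roughly a sixth of the 2060's per-frame cost, so seeing the same proportional penalty on both suggests the bottleneck is elsewhere.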
There is no reason an Nvidia chip with a Turing or newer architecture can't run DLSS 2.0.
We've also seen with XeSS, which likewise uses deep learning, that you don't need specialized hardware to run these sorts of models; it just helps to have it.
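For context, XeSS's fallback path runs the network with DP4a, a generic 4-wide int8 dot-product instruction, rather than dedicated matrix units. A rough CPU-side sketch of the idea (all sizes invented):

```python
import numpy as np

# Sketch of the DP4a primitive XeSS's fallback path builds on: a 4-wide
# int8 dot product accumulated into int32. Most modern GPUs expose this
# instruction without any dedicated matrix hardware.
def dp4a(a4, b4, acc):
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

# A quantized layer then reduces to many dp4a calls.
def int8_matvec(W, x):
    out = np.zeros(W.shape[0], dtype=np.int64)
    for i, row in enumerate(W):
        acc = 0
        for j in range(0, x.size, 4):
            acc = dp4a(row[j:j+4], x[j:j+4], acc)
        out[i] = acc
    return out

rng = np.random.default_rng(1)
W = rng.integers(-128, 128, size=(8, 16), dtype=np.int8)
x = rng.integers(-128, 128, size=16, dtype=np.int8)
print(int8_matvec(W, x))
```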