Also, as for the whole DLSS-on-AMD question: the thing to remember is that Tensor Cores are just very specialized units that accelerate low-precision math like INT4 and INT8 operations. That sort of function is also available on RDNA 2 (via its INT8/INT4 dot-product instructions running on the shader cores), but as with the RT implementation on RDNA 2, it's just slower than even Turing. Still doable, though.
DF did sort of go over the whole thing back in July with the Series X.
The main benefit of Tensor Cores is how little fixed render time the ML upscaling pass adds to each frame. What that means is: take the frame time the game runs at for a given resolution, add the upscaling pass on top, and that increase in frame time is your performance penalty. Obviously, with more Tensor Cores and ML accelerators, that added render time becomes smaller and smaller.
So, as an example: if a game is running at 30fps, that's about 33ms per frame. With the DLSS render-time hit added, a 2060 would run the game at 28fps vs. 26fps on the Series X. Now, obviously the Series X is faster in raster, but the Tensor Core performance of a 2080 (its main raster competitor) is leagues faster. Because of that, at the same resolution with the same settings, and let's say using DLSS/DirectML on both systems, Nvidia will still be faster by a noticeable margin thanks to the Tensor Cores. But you don't need Tensor Cores to do something like DLSS, since the performance gain from rendering at the lower resolution outweighs the penalty of the upscaling pass.
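Here's that arithmetic as a quick Python sketch, if that helps. The per-frame upscale costs are just back-solved from the 28fps/26fps figures above to make the example concrete; they're assumptions, not measured numbers.

```python
def fps_with_upscale(base_fps: float, upscale_cost_ms: float) -> float:
    """Effective frame rate once a fixed ML-upscaling cost is added per frame."""
    base_frame_time_ms = 1000.0 / base_fps
    return 1000.0 / (base_frame_time_ms + upscale_cost_ms)

# 30fps = ~33.3ms per frame. Assumed per-frame upscale costs:
# ~2.4ms on the 2060's Tensor Cores, ~5.1ms on Series X shader-core INT8
# (back-solved from the 28fps vs 26fps example, not measured values).
for name, cost_ms in [("RTX 2060", 2.4), ("Series X", 5.1)]:
    print(f"{name}: {fps_with_upscale(30.0, cost_ms):.1f} fps")
```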
Personally, I think by the time AMD has a comparable DLSS competitor, it's going to be 2022 or later. RDNA 2 does one thing, and one thing really well: raster performance below 4K. I wouldn't get RDNA 2 for anything else.