Captain_Yuri said:
Well, there is a slight catch right now based on the Xbox Series X RDNA2 specifications. Tensor cores are very specialized cores that accelerate INT8 operations, which is what DLSS uses according to Digital Foundry. The thing is, the Series X, PS5 and probably the Series S will have cores that can also do them, just not as fast as Nvidia's tensor cores. Now, due to architectural differences, software differences, etc., maybe RDNA 2 won't need them to be as fast, so who knows. I'm not that deep into how DLSS works at a low level lol. The point is though, with what we know now, if DLSS were ported over to the Series X, it would take twice as long to run compared to a 2060.
And there's an interesting article about it here, too: https://www.eurogamer.net/articles/digitalfoundry-2020-image-reconstruction-death-stranding-face-off "There's an important point of differentiation between Nvidia's hardware and AMD's, however. The green team is deeply invested in AI acceleration across its entire business and it's investing significantly in die-space on the processor for dedicated AI tasks. AMD has not shared its plans for machine learning support with RDNA 2, and there is some confusion about its implementation in the next-gen consoles. Microsoft has confirmed support for accelerated INT4/INT8 processing for Xbox Series X (for the record, DLSS uses INT8) but Sony has not confirmed ML support for PlayStation 5 nor a clutch of other RDNA 2 features that are present for the next generation Xbox and in PC via DirectX 12 Ultimate support on upcoming AMD products. Broadly speaking then, the Xbox Series X GPU has around 50 per cent of the RTX 2060's machine learning processing power. A notional DLSS port would see AI upscaling take 5ms to complete, rather than a 2060's circa 2.5ms. That's heavy, but still nowhere near as expensive as generating a full 4K image - and that's assuming that Microsoft isn't working on its own machine learning upscaling solution better suited to console development (spoilers: it is - or at least it was a few years back). In the meantime though, DLSS is the most exciting tech of its type - we're sure to see the technology evolve and for Nvidia to leverage a key hardware/software advantage. The only barrier I can see is its status as a proprietary technology requiring bespoke integration. DLSS only works as long as developers add it to their games, after all.
As exciting as the prospects for machine learning upscaling are, I also expect to see continued development of existing non-ML reconstruction techniques for the next-gen machines - Insomniac's temporal injection technique (as seen in Ratchet and Clank and Marvel's Spider-Man) is tremendous and I'm fascinated to see how this could evolve given access to the PS5's additional horsepower." With that being said though, there are a lot more months for Sony, MS and AMD to reveal more and more about their GPUs and other features, so things could change.
Welp, you can see why Microsoft isn't making much fuss about this feature for the Xbox Series X. A lowly RTX 2060 has double the machine learning performance of a 12 TFLOP AMD RDNA2 card.
Want to bet that disparity is even worse for the Xbox Series S? The Series S would benefit from this more than the Series X would, but if the Series X can only manage half the machine learning performance of Nvidia's lowest-end RTX GPU, then the 4 TFLOP version is probably going to have problems.
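To put that in perspective, here's a rough back-of-the-envelope Python sketch using the Digital Foundry numbers quoted above (~2.5ms for the DLSS pass on a 2060, Series X at ~50% of the 2060's ML throughput). The Series S ratio is purely my own assumption, scaling ML throughput by the 4/12 TFLOP compute ratio versus the Series X, so treat that line as speculation, not a spec:

```python
# Illustrative estimate of ML upscaling cost per frame.
# Known figures (from the Digital Foundry article quoted above):
#   - DLSS pass on an RTX 2060: ~2.5ms
#   - Xbox Series X ML throughput: ~50% of the 2060's
# Assumed (my speculation, NOT a confirmed spec):
#   - Series S ML throughput scales with its 4/12 TFLOP ratio vs. Series X

RTX_2060_UPSCALE_MS = 2.5  # DF's cited cost for the DLSS pass on a 2060
FRAME_BUDGET_30FPS_MS = 33.3  # frame time budget at 30fps

# Relative INT8 ML throughput versus the RTX 2060 (1.0 = parity)
relative_ml_perf = {
    "Xbox Series X": 0.50,                        # per Digital Foundry
    "Xbox Series S (assumed)": 0.50 * (4 / 12),   # hypothetical TFLOP scaling
}

for gpu, ratio in relative_ml_perf.items():
    est_ms = RTX_2060_UPSCALE_MS / ratio
    share = est_ms / FRAME_BUDGET_30FPS_MS * 100
    print(f"{gpu}: ~{est_ms:.1f}ms per frame (~{share:.0f}% of a 30fps budget)")
```

Under those assumptions the Series X lands at the article's ~5ms figure, while a TFLOP-scaled Series S would be around 15ms, nearly half a 30fps frame budget, which is why the disparity looks so much uglier on the smaller console.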