
XBox Series S Could Be A Nice Bounce For Switch 2

Captain_Yuri said:
Soundwave said:

By the way, where is this magical Microsoft DLSS equivalent? Not one peep out of them about it regarding the XBox Series X, where they've discussed every other hardware feature ad nauseam.

The XBox division has never mentioned this tech at all, which is curious. MS did have a presentation on it, but wouldn't you know it, the GPU they were using to demo it was Nvidia hardware, not AMD.

If the Series X can do that, why not use it for that Minecraft ray tracing demo that tanked the hardware's performance down to 1080p? Surely you could render at even 720p native and then scale up to an even better 1440p if the chip were capable of doing so. You would get even better image quality while actually taxing the system far less, so it kinda raises the question of where exactly that super duper ML tech is.

Because we know for Nvidia it's here and it's now, no hype, no fuss; there are games using it that you can play right now.

So one has to ask exactly why they're not using it. My guess is that in the PC ML demo they showed, they were using Nvidia's Tensor cores to help achieve that effect. You would think that Sony especially, if something like that were possible with the AMD GPU they are using, would be shouting about it from the rooftops.

How often is a major, major hardware feature that dramatically impacts performance not even talked about, for any gaming hardware, three months before launch?

Well, there is a slight catch as of right now, based on the Xbox Series X RDNA2 specifications.

Tensor cores are very specialized cores that accelerate INT8 operations, which is what's used to render DLSS according to Digital Foundry. The thing is, the Series X, PS5, and probably the Series S will have cores that can also do them, just not as fast as Nvidia's Tensor cores. Now, due to architectural differences, software differences, etc., maybe RDNA 2 might not need them to be as fast, so who knows. I am not that technical when it comes to how DLSS works at a low level lol.

The point is though, with what we know now, if we were to port DLSS over to the Series X, it would take twice as long to render compared to a 2060.

And there's an interesting article about it here too:

https://www.eurogamer.net/articles/digitalfoundry-2020-image-reconstruction-death-stranding-face-off

"There's an important point of differentiation between Nvidia's hardware and AMD's, however. The green team is deeply invested in AI acceleration across its entire business and it's investing significantly in die-space on the processor for dedicated AI tasks. AMD has not shared its plans for machine learning support with RDNA 2, and there is some confusion about its implementation in the next-gen consoles. Microsoft has confirmed support for accelerated INT4/INT8 processing for Xbox Series X (for the record, DLSS uses INT8) but Sony has not confirmed ML support for PlayStation 5 nor a clutch of other RDNA 2 features that are present for the next generation Xbox and in PC via DirectX 12 Ultimate support on upcoming AMD products.

Broadly speaking then, the Xbox Series X GPU has around 50 per cent of the RTX 2060's machine learning processing power. A notional DLSS port would see AI upscaling take 5ms to complete, rather than a 2060's circa 2.5ms. That's heavy, but still nowhere near as expensive as generating a full 4K image - and that's assuming that Microsoft isn't working on its own machine learning upscaling solution better suited to console development (spoilers: it is - or at least it was a few years back). In the meantime though, DLSS is the most exciting tech of its type - we're sure to see the technology evolve and for Nvidia to leverage a key hardware/software advantage. The only barrier I can see is its status as a proprietary technology requiring bespoke integration. DLSS only works as long as developers add it to their games, after all.

As exciting as the prospects for machine learning upscaling are, I also expect to see continued development of existing non-ML reconstruction techniques for the next-gen machines - Insomniac's temporal injection technique (as seen in Ratchet and Clank and Marvel's Spider-Man) is tremendous and I'm fascinated to see how this could evolve given access to the PS5's additional horsepower."

With that being said, there are many more months for Sony, MS, and AMD to reveal more about their GPUs and other features, so things could change.

Welp, you can see why Microsoft is not making much fuss about this feature for the XBox Series X. A lowly RTX 2060 has double the machine learning performance of a 12 TFLOP AMD RDNA2 card.

Want to bet that disparity is even worse for the XBox Series S? The Series S would really benefit from this more than the Series X, but if the Series X can only manage half the machine learning performance of Nvidia's lowest-end RTX GPU, then the 4 TFLOP version of that is probably gonna have problems.
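To put those Digital Foundry numbers in frame-time terms, here's a quick back-of-envelope sketch (the 2.5ms and 5ms upscaling costs are the article's estimates quoted above; the frame budgets are just the standard 30fps/60fps frame times):

```python
# Back-of-envelope: what a DLSS-style upscaling pass costs out of a frame budget.
# The 2.5 ms (RTX 2060) and 5 ms (Series X, ~50% of the 2060's ML throughput)
# figures are Digital Foundry's estimates from the quoted article.

FRAME_60FPS_MS = 1000 / 60  # ~16.7 ms per frame
FRAME_30FPS_MS = 1000 / 30  # ~33.3 ms per frame

upscale_cost_ms = {"RTX 2060": 2.5, "Series X (notional port)": 5.0}

for gpu, cost in upscale_cost_ms.items():
    print(f"{gpu}: {cost} ms = "
          f"{100 * cost / FRAME_60FPS_MS:.0f}% of a 60fps frame, "
          f"{100 * cost / FRAME_30FPS_MS:.0f}% of a 30fps frame")

# RTX 2060: 2.5 ms = 15% of a 60fps frame, 8% of a 30fps frame
# Series X (notional port): 5.0 ms = 30% of a 60fps frame, 15% of a 30fps frame
```

Even at double the cost, the pass is still far cheaper than shading a native 4K frame, which is exactly the trade-off the article describes.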




It is so funny when you pick all the best possible options for Nintendo versus the absolute worst for the Series S, then pretend DLSS 2.0 will make for something like a 4x power difference because it is so much more efficient than what is in the Series S... and on top of that assume developers will use ray tracing in the most obtuse way possible, so that the game would look current-gen with it on...
Not to forget you were already claiming the Switch to be basically the same power as the X1 because it runs Witcher 3 (by that logic it would also be almost the same power as the X1X), only to jump to saying it would in fact be almost stronger than the PS5 solely because of DLSS 2.0, and then complain when you're not taken seriously. We will be waiting for this portable Switch 2 that is more powerful than the PS5, putting out better-looking games in objective terms, not those "Wii has better looking games than PS3 and X360 because of superior art style" shenanigans.




Captain_Yuri said:
Soundwave said:

By the way, where is this magical Microsoft DLSS equivalent? <SNIP>

Well, there is a slight catch as of right now, based on the Xbox Series X RDNA2 specifications. <SNIP>

With that being said, there are many more months for Sony, MS, and AMD to reveal more about their GPUs and other features, so things could change.

Don't forget the Unreal Engine 5 demo on PS5 at 1440p, where Digital Foundry said they couldn't really distinguish the pixels, which basically makes pixel counting rather obsolete.

Also, both MS and Sony have talked about half-precision and integer capability on their consoles that would support temporal reconstruction techniques.

Not to forget as well that the PS4 Pro made big use of single precision for its checkerboard technique; since then, DF have said that checkerboarding from 1440p to 4K has almost the same level of clarity as native 4K for a fraction of the cost.

It is silly to think that everyone has been sleeping on the graphics front while Nintendo will be the one pushing that boundary by themselves. Sure, because Nintendo games are the ones that would benefit most from it, with all their push for fidelity and photorealism, right?
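For a sense of why checkerboarding costs "a fraction" of native 4K, here's a quick pixel-count sketch (the 1440p-to-4K pairing is from the post; the "shade roughly half the target pixels per frame" figure is the usual description of checkerboard rendering in general, not the PS4 Pro's exact implementation):

```python
# Rough pixel-count comparison: native 4K vs a checkerboard pass.
native_4k  = 3840 * 2160      # ~8.29M pixels shaded per frame
cbr_pass   = native_4k // 2   # checkerboard shades ~half the target pixels
base_1440p = 2560 * 1440      # ~3.69M pixels, the "from 1440p" starting point

print(f"Native 4K:       {native_4k:>9,} px")
print(f"4K checkerboard: {cbr_pass:>9,} px (~{100 * cbr_pass / native_4k:.0f}% of native)")
print(f"1440p:           {base_1440p:>9,} px (~{100 * base_1440p / native_4k:.0f}% of native)")
```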




DonFerrari said:
Captain_Yuri said:

Well, there is a slight catch as of right now, based on the Xbox Series X RDNA2 specifications. <SNIP>

Don't forget the Unreal Engine 5 demo on PS5 at 1440p, where Digital Foundry said they couldn't really distinguish the pixels, which basically makes pixel counting rather obsolete.

Also, both MS and Sony have talked about half-precision and integer capability on their consoles that would support temporal reconstruction techniques.

Not to forget as well that the PS4 Pro made big use of single precision for its checkerboard technique; since then, DF have said that checkerboarding from 1440p to 4K has almost the same level of clarity as native 4K for a fraction of the cost.

It is silly to think that everyone has been sleeping on the graphics front while Nintendo will be the one pushing that boundary by themselves. Sure, because Nintendo games are the ones that would benefit most from it, with all their push for fidelity and photorealism, right?

Personally, and I could be wrong on this for sure, I don't think consumer RDNA 2 will have a direct DLSS competitor, regardless of platform. Imo, looking at what we know about consumer RDNA 2, it's not being built around our AI overlords, and that's not a bad thing. I think the point of RDNA 2 is to catch up to Nvidia's rasterization and ray tracing performance, and it will do that just fine. I do think the PS5 and Series X will have their own techniques, like Insomniac's temporal injection, and that will be fine, since not everything needs to be AI driven.

And it's not so much Nintendo pushing boundaries but rather Nvidia pushing them, with Nintendo just along for the ride. With that being said, I am not saying that the Switch 2 with DLSS > Series S or PS5 or Series X. Heck, the first indication of the Switch 2's GPU might come next year, but even then, Nintendo could tell Nvidia not to bother with Tensor cores or ray tracing, as those require additional wattage in an already very power-limited scenario. Plus, the GPU is just one aspect. How is the CPU gonna perform? What's the VRAM? Etc.

But I do think that at the end of the day, Switch 2 vs Series S will be interesting. While I am not saying the Switch 2 will be as fast as the Series S, I do think the gap could be closer than Switch vs Xbox One.




Soundwave said:
Slownenberg said:

This will definitely not happen; they need a longer life to make sure Switch 2 is roughly on par with the Series S. So no, they aren't gonna cut their best money-making machine ever short just in the hope of getting third party games on their next system a little sooner, especially when that might actually backfire and make it harder for them to get third party games by releasing a weaker system earlier.

Also, knowing Nintendo, they won't even take all this into consideration. It'll just be the fact that the Switch is super popular, so it'll be in the market for a long time, and when a successor does come out in like 2024 it'll be able to handle Series S games without much work beyond the straight porting work.

You have that backwards.

It is going to be EASIER for Nintendo to get on par with this generation than with the PS4/XB1 generation.

People can spin this all day long, but the fact is this generation of machines has a much lower floor than the PS4/XB1 did.

4 TFLOPS is much, much easier for Nintendo to get games from in 2023 than matching a PS4/XB1 was in 2015 for the Tegra X1.

The equivalent of this would be if Sony had also released a PS4 Lite or something in 2013 that was only 1/3 of a PS4 ... the Switch can already handle ports of Witcher 3 and Dragon Quest XI and DOOM OK .... imagine for a moment that many, many PS4 multiplatform games all had performance modes that operated at 1/3 the power of a PS4 ... anyone trying to claim that wouldn't be a major difference is a fucking liar.

Again, so many people are threatened by this idea lol, but whatever. The fact is the floor for this next gen is definitely lower because of the Series S.

There are going to be a lot of games sitting on XBox Series S that are going to be much, much easier to port to a Switch 2 than the current dynamic that exists between Switch and PS4/XB1. The Switch 2 doesn't need to exactly match the Series S either; it just needs to be in a somewhat similar ballpark. Even 2.5 TFLOPs docked with a modern architecture like Ampere would do the trick for many, many games.

A game running at 1080p on Series S (4K on Series X)? You have to understand that because of DLSS 2.0 on Switch 2, that same game could run at 360p undocked and 576p docked (which upscale nicely to 900p undocked and 1440p docked). At that point what's left of your performance gap shrinks considerably.

You don't even need 4 TFLOPS when you're only asking a modern GPU design to render at 360p-576p (sub GameCube/PS2 level for undocked, lol).

My point was that they aren't gonna cut off the Switch early just to theoretically get new third party games out earlier on a successor. I agree it will be easier next gen for games to get ported to the next Nintendo system, especially considering that it looks like devs are gonna target 4K/90fps or something (I've even seen talk of 120fps), so bumping that down to, say, 1080p/30fps might be pretty much all you have to do to get it running on a next-gen Nintendo portable, or something like that. And I don't know anything about DLSS, but it sounds like it'll make it even easier to run at a lower resolution while still having the image look like it's higher.

And like I said, I doubt Nintendo will even take this into consideration. They're gonna plan their next system based on how the Switch is doing, not based on how powerful the other systems are. But yeah, I expect it won't be too hard to port an Xbox game over to the Switch 2. Now, whether third parties actually do that is another thing entirely, but there certainly shouldn't be much of an excuse not to anymore.
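A quick sketch of the gap math from the posts above (the 2.5 TFLOP Switch 2 and the 360p/576p internal resolutions are Soundwave's hypotheticals; the ~1.84 TFLOP PS4 and ~0.39 TFLOP docked Switch figures are the commonly cited specs):

```python
# How the raw-power gap and the pixel workload compare across generations.

# Raw compute ratios (TFLOPs; the Switch 2 figure is the post's hypothetical)
print(f"PS4 vs docked Switch:        ~{1.84 / 0.39:.1f}x")  # ~4.7x this gen
print(f"Series S vs 2.5 TF Switch 2: ~{4.0 / 2.5:.1f}x")    # ~1.6x next gen

# Pixel counts at the internal resolutions from the post vs Series S at 1080p
series_s_1080p = 1920 * 1080   # ~2.07M px
docked_576p    = 1024 * 576    # ~0.59M px before DLSS reconstructs to 1440p
undocked_360p  = 640 * 360     # ~0.23M px before DLSS reconstructs to 900p

print(f"576p docked:   ~{series_s_1080p / docked_576p:.1f}x fewer pixels than 1080p")
print(f"360p undocked: ~{series_s_1080p / undocked_360p:.0f}x fewer pixels than 1080p")
```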




I think we're headed to an era where console games basically are like PC games.

Instead of devs making bespoke/customized versions for each console, they'll make one generic version with a few tiers of graphics settings.

XBox Series S = mid-graphics settings + 1080p resolution

Switch 2 = mid-graphics settings + 360p-720p native resolution (w/DLSS 2.0 this will appear as a much higher resolution)

PS5/XBSX = high graphics settings + 1800p-4K resolution

PC = ultra graphics settings + 4K + ray tracing

Games will scale much more freely than in the past. Microsoft is basically "conditioning" developers to make next-gen games run on a much lower-end target (4 TFLOPs), and then I think devs will look at the extra power that PS5/XSX offer and use it mainly for resolution, with maybe a bump in some effects. If there's any headroom left over, maybe they'll try turning on some basic ray tracing effects.

But the industry headed in this direction is good for Switch 2.



DonFerrari said:
Captain_Yuri said:

Well, there is a slight catch as of right now, based on the Xbox Series X RDNA2 specifications. <SNIP>

Don't forget the Unreal Engine 5 demo on PS5 at 1440p, where Digital Foundry said they couldn't really distinguish the pixels, which basically makes pixel counting rather obsolete.

Also, both MS and Sony have talked about half-precision and integer capability on their consoles that would support temporal reconstruction techniques.

Not to forget as well that the PS4 Pro made big use of single precision for its checkerboard technique; since then, DF have said that checkerboarding from 1440p to 4K has almost the same level of clarity as native 4K for a fraction of the cost.

It is silly to think that everyone has been sleeping on the graphics front while Nintendo will be the one pushing that boundary by themselves. Sure, because Nintendo games are the ones that would benefit most from it, with all their push for fidelity and photorealism, right?

It's not really Nintendo that should push the boundary of graphics; it's more a question of AMD vs Nvidia technology. I believe a next-gen Switch will have its own advantages even compared to the next-gen PS and Xbox, simply through the eventual use of the latest AI technologies developed by Nvidia. That said, the Switch 2 (if it retains the current form factor) will still be a low power / low spec machine, so I agree with you, people should not get their hopes too high on the graphical front.



Soundwave said:

That's great and all, but I would say again ... where is it? They've shown really nothing of this on any actual XBox, which raises a lot of questions.

I'm guessing their implementation, which isn't hardware-specific, has drawbacks in performance cost. Otherwise they would be crowing about it from every rooftop.

Especially if Sony doesn't have an equivalent. Nvidia's DLSS 2.0 is not some smoke-and-mirrors PR buzz; you can run it on actual games right now. The Series X is three months from launch and Microsoft has little to nothing to say about a DLSS-like implementation. That is pretty hard to believe given the performance implications of something like that.

Unless of course it doesn't work as well in real-world scenarios (or at least on the AMD hardware) as MS has been saying.

Of course it has performance costs.
It is basically doing the same thing as nVidia's approach, but instead of doing the processing on the Tensor cores, it is doing it in the shaders, leveraging rapid packed math. It's hardware agnostic, remember; it can be done on even Intel integrated graphics.

But it also has performance benefits, because of the lower rendering resolution and such.

But yes, it has been demonstrated; they even got nVidia onboard.
https://devblogs.microsoft.com/directx/wp-content/uploads/sites/42/2018/03/WinML_acceleration_GDC.pdf

Either way... I am finding it a bit concerning that I have provided all this evidence and information for this feature and you still seem to be downplaying it substantially.
AI is just not as important a marketing "buzzword" as the real next-gen features, aka hardware ray tracing or SSDs... These days, in 2020, the AI buzzword is used everywhere, even for the camera sensors in modern smartphones. It's just expected at this point.

But unlike your prior statement... Microsoft has certainly "said" a shit ton about their AI upscaling... The evidence is in all the linkage I have provided.

Captain_Yuri said:

Well, there is a slight catch as of right now, based on the Xbox Series X RDNA2 specifications.

Tensor cores are very specialized cores that accelerate INT8 operations, which is what's used to render DLSS according to Digital Foundry. The thing is, the Series X, PS5, and probably the Series S will have cores that can also do them, just not as fast as Nvidia's Tensor cores. Now, due to architectural differences, software differences, etc., maybe RDNA 2 might not need them to be as fast, so who knows. I am not that technical when it comes to how DLSS works at a low level lol.

The point is though, with what we know now, if we were to port DLSS over to the Series X, it would take twice as long to render compared to a 2060.

<SNIP>


Even if the Series X and Series S don't have dedicated INT cores, AMD's hardware can do it natively on the shader pipelines anyway.
RDNA natively has support for INT8 operations in its shaders (though not via RPM, AFAIK) and will pack two INT16 ops into an INT32... And there is the possibility that RDNA 2 will take that a step further and include INT4.
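To illustrate what "packing two INT16 ops into an INT32" means, here's a toy sketch of the idea in plain Python bit-twiddling (an illustration of the concept, not actual shader code):

```python
# Toy packed math: two independent 16-bit adds carried in one 32-bit word,
# the same trick rapid-packed-math-style hardware uses to double INT16/FP16
# throughput relative to 32-bit operations.

MASK16 = 0xFFFF

def pack2x16(hi: int, lo: int) -> int:
    """Pack two unsigned 16-bit values into one 32-bit word."""
    return ((hi & MASK16) << 16) | (lo & MASK16)

def add2x16(a: int, b: int) -> int:
    """Add the two 16-bit lanes independently (no carry between lanes)."""
    lo = ((a & MASK16) + (b & MASK16)) & MASK16
    hi = ((a >> 16) + (b >> 16)) & MASK16
    return (hi << 16) | lo

r = add2x16(pack2x16(1000, 7), pack2x16(2000, 5))
print((r >> 16) & MASK16, r & MASK16)  # 3000 12 -> two adds, one 32-bit op
```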

Temporal re-projection is likely to be a big tool next gen; DLSS isn't a magic bullet that all developers are clamoring for.
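For anyone wondering what temporal re-projection actually does, here's a minimal numpy sketch of the core step (real implementations add per-pixel depth tests, history blending, and rejection of stale samples):

```python
import numpy as np

def reproject(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Fetch last frame's colour from where each pixel came from.

    prev_frame: (H, W, 3) colour buffer from the previous frame.
    motion:     (H, W, 2) per-pixel screen-space offsets (dy, dx) since then.
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Where was this pixel last frame? Clamp lookups to the screen edges.
    src_y = np.clip(ys - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]

# A reconstruction pass then blends this reprojected history with the current
# low-resolution frame to accumulate detail over time.
```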

freebs2 said:

It's not really Nintendo that should push the boundary of graphics; it's more a question of AMD vs Nvidia technology. I believe a next-gen Switch will have its own advantages even compared to the next-gen PS and Xbox, simply through the eventual use of the latest AI technologies developed by Nvidia. That said, the Switch 2 (if it retains the current form factor) will still be a low power / low spec machine, so I agree with you, people should not get their hopes too high on the graphical front.

Nintendo haven't had the most graphically advanced console since the 5th console generation with the Nintendo 64.

I think at this point it's expected that Nintendo isn't chasing the best graphics in the industry... But at the end of the day, it doesn't really matter as long as the visuals are "good enough" and the games play amazing.

Last edited by Pemalite - on 11 August 2020



Soundwave said:
I think we're headed to an era where console games basically are like PC games.

Instead of devs making bespoke/customized versions for each console, they'll make one generic version with a few tiers of graphics settings.

XBox Series S = mid-graphics settings + 1080p resolution

Switch 2 = mid-graphics settings + 360p-720p native resolution (w/DLSS 2.0 this will appear as a much higher resolution)

PS5/XBSX = high graphics settings + 1800p-4K resolution

PC = ultra graphics settings + 4K + ray tracing

Games will scale much more freely than in the past. Microsoft is basically "conditioning" developers to make next-gen games run on a much lower-end target (4 TFLOPs), and then I think devs will look at the extra power that PS5/XSX offer and use it mainly for resolution, with maybe a bump in some effects. If there's any headroom left over, maybe they'll try turning on some basic ray tracing effects.

But the industry headed in this direction is good for Switch 2.

I think each console will still get customized versions, but the dev work to adapt games to different performance profiles will be easier, since engines will be even more scalable than in the past.

A good example of this is the Nanite tech in UE5. From what I understand, the engine can use it to procedurally determine the level of geometry detail to render based on a goal set by the developer.
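As a toy sketch of that goal-driven idea (just the general concept of picking a detail level to meet a budget, not Nanite's actual algorithm, which works per-cluster rather than per-mesh):

```python
# Pick the densest level of detail for a mesh that still fits a triangle
# budget set by the developer, instead of hand-authoring per-platform LODs.

def pick_lod(lod_triangle_counts: list, budget: int) -> int:
    """Return the index of the most detailed LOD that fits the budget.

    lod_triangle_counts: triangle counts, ordered most to least detailed.
    """
    for i, tris in enumerate(lod_triangle_counts):
        if tris <= budget:
            return i
    return len(lod_triangle_counts) - 1  # nothing fits: use the coarsest

statue_lods = [1_000_000, 250_000, 60_000, 15_000]
print(pick_lod(statue_lods, budget=300_000))  # -> 1 (the 250k-triangle LOD)
print(pick_lod(statue_lods, budget=20_000))   # -> 3 (the 15k-triangle LOD)
```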

Last edited by freebs2 - on 11 August 2020

freebs2 said:
DonFerrari said:

Don't forget the Unreal Engine 5 demo on PS5 at 1440p, where Digital Foundry said they couldn't really distinguish the pixels, which basically makes pixel counting rather obsolete. <SNIP>

It's not really Nintendo that should push the boundary of graphics; it's more a question of AMD vs Nvidia technology. I believe a next-gen Switch will have its own advantages even compared to the next-gen PS and Xbox, simply through the eventual use of the latest AI technologies developed by Nvidia. That said, the Switch 2 (if it retains the current form factor) will still be a low power / low spec machine, so I agree with you, people should not get their hopes too high on the graphical front.

Agreed, but Nintendo still has a say on the matter. Considering they prefer the best bang for the buck on cost and wattage, and tend to choose more conservative, proven specs (i.e., going for something that is already out instead of jumping ahead on hardware that would launch first on their system and only later on the open market), I think sure, some of the tech in the Switch 2 or whatever the system is named will be more advanced than the PS5 and Xbox, since it will be a newer system and from Nvidia (just like Switch 1 has some elements more modern than the PS4 and X1). But I don't really think DLSS 2.0 or 3.0 will be where Nintendo dedicates much money and silicon, as their games don't really benefit from it.

But sure, we will only know for certain when it happens. Nintendo games on it will look good and pretty, as has been the case for several gens, but expecting DLSS 2.0 or 3.0 to be an atomic bomb capable of taking a sub-480p image, making it look better than 1080p, and closing a 10x power gap will only lead to disappointment.


