
Forums - Nintendo - XBox Series S Could Be A Nice Bounce For Switch 2

Captain_Yuri said:

The Switch 2 vs Xbox Series S is probably gonna be one of the more interesting comparisons next generation, and I am very much looking forward to seeing it. The problem right now is that it's hard to gauge a lot of things without knowing the actual performance of the Series S and of RDNA 2 vs Ampere. If I were to guess, the next MX-series laptop GPUs that should come out next year from Nvidia should give us a good starting point for the Switch 2's GPU. If the rumours are true and Ampere gets 4x the ray tracing performance of Turing, then it wouldn't surprise me if an Ampere MX part gets the 2060's ray tracing and tensor core performance.

The other questions are target resolution and price. Is the Switch 2 going to be $300, or will they bump that up? Is it still going to target 720p, or move to 1080p?

The big unknown is going to be the CPU, as Zen 2 is very strong and Nvidia's ARM CPUs have historically been meh. I am not too worried about SSD speeds, as I am sure most third-party games are going to be very scalable. All in all, though, we should have a good idea about Ampere in about a month or so.

Man is it ever refreshing to have someone actually post a thoughtful discussion on the actual topic. Bravo sir. 

I think the Switch 2 will go up to $350. Hardware prices eventually go up, and Nintendo already wanted the $350 bracket with the Wii U; now that they have confidence in the Switch concept, they'll likely feel confident enough to go up a tier in pricing.

Resolution can be all over the place. We've seen DLSS 2.0 examples (hacked versions of Control for PC) where people take a resolution as low as 288p and reconstruct it at 1080p, so I think it's really just up to the developer.
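For a sense of how aggressive that is, here's the raw pixel math as a quick back-of-envelope sketch (resolutions assume standard 16:9 frames; DLSS's actual cost is more involved than pixel counting):

```python
# Back-of-envelope: how far is 288p -> 1080p reconstruction stretching things?
# Assumes standard 16:9 frame sizes; DLSS's real cost model is more complex.

def pixels(width: int, height: int) -> int:
    """Total pixels in one frame."""
    return width * height

internal = pixels(512, 288)      # 288p internal render
output = pixels(1920, 1080)      # 1080p reconstructed output

linear_scale = 1080 / 288        # per-axis upscale factor
pixel_scale = output / internal  # output pixels per rendered pixel

print(f"linear scale: {linear_scale:.2f}x per axis")     # 3.75x
print(f"pixel scale:  {pixel_scale:.2f}x total pixels")  # 14.06x
```

For comparison, DLSS 2.0's standard "performance" mode reconstructs from 1/4 of the output pixels, so 288p-to-1080p (about 1/14) is well beyond the usual operating point.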



Soundwave said:

The thing that makes me go "hmmm" about that is Nintendo did basically get Parker too ... Mariko is basically a Parker-tier chip, just without the Denver CPU cores.

And Mariko is something that was leaked as being part of the Switch SDK since at least 2017, so that has been in the works for a while. 

Probably, since Pascal is basically Maxwell die-shrunk to 16nm with improvements. DF estimates Mariko's die is 16nm FinFET, down from the original Switch's 20nm.

It'll be interesting either way to see what comes of this new hardware. It's still a ways away: Mariko leaked over a year before its release, and that was from a dev kit, which the new hardware doesn't even have yet. I'm guessing it's probably two years away, which sounds about right; a five-year cycle is pretty normal for Nintendo.



Soundwave said:

For starters take the PS4 and cut its power down to 1/3 (600 GFLOPS) for a secondary model. You're full of shit if you're going to claim that wouldn't be a significant difference for the Switch in how easily it could run current games. That changes A LOT of things.

You are being extremely hostile in this thread, I highly suggest you tone it down Soundwave. Cheers.




www.youtube.com/@Pemalite

Soundwave said:
Captain_Yuri said:

The Switch 2 vs Xbox Series S is probably gonna be one of the more interesting comparisons next generation, and I am very much looking forward to seeing it. The problem right now is that it's hard to gauge a lot of things without knowing the actual performance of the Series S and of RDNA 2 vs Ampere. If I were to guess, the next MX-series laptop GPUs that should come out next year from Nvidia should give us a good starting point for the Switch 2's GPU. If the rumours are true and Ampere gets 4x the ray tracing performance of Turing, then it wouldn't surprise me if an Ampere MX part gets the 2060's ray tracing and tensor core performance.

The other questions are target resolution and price. Is the Switch 2 going to be $300, or will they bump that up? Is it still going to target 720p, or move to 1080p?

The big unknown is going to be the CPU, as Zen 2 is very strong and Nvidia's ARM CPUs have historically been meh. I am not too worried about SSD speeds, as I am sure most third-party games are going to be very scalable. All in all, though, we should have a good idea about Ampere in about a month or so.

Man is it ever refreshing to have someone actually post a thoughtful discussion on the actual topic. Bravo sir. 

I think the Switch 2 will go up to $350. Hardware prices eventually go up, and Nintendo already wanted the $350 bracket with the Wii U; now that they have confidence in the Switch concept, they'll likely feel confident enough to go up a tier in pricing.

Resolution can be all over the place. We've seen DLSS 2.0 examples (hacked versions of Control for PC) where people take a resolution as low as 288p and reconstruct it at 1080p, so I think it's really just up to the developer.

Yea, I think $350 would be a pretty good price point as well. One thing I am hoping for is that they use TSMC's 7nm or lower over Samsung's 8nm by the time the Switch 2 comes out, since TSMC's process should be better for performance-per-watt, which is especially important for portables. With that being said, I do think going with Nvidia would still be the correct choice even with Samsung's 8nm, since Nvidia's architecture will most likely mitigate process deficiencies even against RDNA 2 on TSMC's 7nm. But we will see, of course.

God the next few months are gonna be fap worthy in terms of hardware releases.

Last edited by Jizz_Beard_thePirate - on 10 August 2020

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

By the way where is this magical Microsoft DLSS equivalent? Not one peep out of them about that regarding XBox Series X where they've discussed every other hardware feature ad nauseam.

The XBox division has never mentioned this tech at all, which is curious. MS did have a presentation on it, but wouldn't you know it, the GPU they were using to demo it was Nvidia hardware, not AMD.

If the Series X can do that, why not use it for that Minecraft ray tracing demo that tanked the hardware's performance down to 1080p? Surely you could render at even 720p native and then scale up to an even better 1440p if the chip were capable of it. You would get better image quality while taxing the system far less, so it kind of begs the question of where exactly that super-duper ML tech is.

Because we know for Nvidia it's here and it's now, no hype, no fuss, there are games using it that you can play right now. 

So one has to ask exactly why they're not using that. My guess is that in the PC ML demo they showed, they were using Nvidia's Tensor cores to help achieve that effect. You would think Sony especially, if something like that was possible with the AMD GPU they are using, would be shouting about it from the rooftops.

How often exactly is a major, major hardware feature that dramatically impacts performance not even talked about for any gaming hardware 3 months prior to hardware launch?
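The pixel counts behind that 720p-to-1440p suggestion, as a rough sketch (standard 16:9 resolutions; actual ray tracing cost doesn't scale purely with pixel count, so treat this as illustrative):

```python
# Pixel budgets for the Minecraft RT argument: render 720p natively,
# let an ML upscaler fill in a 1440p output, vs. native 1080p.
res = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
}
px = {name: w * h for name, (w, h) in res.items()}

render_saving = px["720p"] / px["1080p"]   # 4/9: shade ~44% of native 1080p's pixels
upscale_factor = px["1440p"] / px["720p"]  # 4.0: the upscaler supplies 4x the pixels

print(f"720p renders {render_saving:.0%} of 1080p's pixels")
print(f"1440p output is {upscale_factor:.0f}x the 720p input")
```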

Last edited by Soundwave - on 10 August 2020

Soundwave said:

By the way where is this magical Microsoft DLSS equivalent? Not one peep out of them about that regarding XBox Series X where they've discussed every other hardware feature ad nauseam.

The XBox division has never mentioned this tech at all, which is curious. MS did have a presentation on it, but wouldn't you know the GPU they were using to demo that was Nvidia hardware, not AMD.

If Series X can do that why not use it for that Minecraft ray tracing demo that tanked the hardware's performance down to 1080p. Surely you could render it at even 720p native and then scale up to an even better 1440p if the chip was capable of doing so? You would get an even better image quality while actually taxing the system far less, so it kinda begs the question where exactly that super duper ML tech is. 

Because we know for Nvidia it's here and it's now, no hype, no fuss, there are games using it that you can play right now. 

So one has to ask exactly why they're not using that. My guess is on the PC ML demo they showed they were using Nvidia's Tensor cores to help achieve that effect.

I already linked to DirectML, which is the Microsoft DLSS equivalent. Please read the information I posted prior.





Pemalite said:
Soundwave said:

By the way where is this magical Microsoft DLSS equivalent? Not one peep out of them about that regarding XBox Series X where they've discussed every other hardware feature ad nauseam.

The XBox division has never mentioned this tech at all, which is curious. MS did have a presentation on it, but wouldn't you know the GPU they were using to demo that was Nvidia hardware, not AMD.

If Series X can do that why not use it for that Minecraft ray tracing demo that tanked the hardware's performance down to 1080p. Surely you could render it at even 720p native and then scale up to an even better 1440p if the chip was capable of doing so? You would get an even better image quality while actually taxing the system far less, so it kinda begs the question where exactly that super duper ML tech is. 

Because we know for Nvidia it's here and it's now, no hype, no fuss, there are games using it that you can play right now. 

So one has to ask exactly why they're not using that. My guess is on the PC ML demo they showed they were using Nvidia's Tensor cores to help achieve that effect.

Already linked to Direct ML which is the Microsoft DLSS equivalent. Please read the information that I posted prior.

So where is it? 

Please post a link to Sony or MS using this technology for the AMD based PS5 or XSX. 

Not a demo using an Nvidia PC GPU. That is not the same thing. 

It is rather curious to me that MS has not provided any actual demonstration for this on XBox whatsoever, and Sony has nothing to say on the topic at all. 



Soundwave said:
Pemalite said:

Already linked to Direct ML which is the Microsoft DLSS equivalent. Please read the information that I posted prior.

So where is it? 

Please post a link to Sony or MS using this technology for the AMD based PS5 or XSX. 

Not a demo using an Nvidia PC GPU. That is not the same thing. 

It is rather curious to me that MS has not provided any actual demonstration for this on XBox whatsoever, and Sony has nothing to say on the topic at all. 

Again: I have already linked to DirectML. It's not an AMD technology, and I never even mentioned nVidia in the original post. Did you not read my post that was in response to yours?

Either way it's written in here: https://gamrconnect.vgchartz.com/post.php?id=9197491

Or more specifically direct from Microsoft themselves: https://docs.microsoft.com/en-us/windows/win32/direct3d12/dml-intro
With more information here: https://devblogs.microsoft.com/directx/directml-at-gdc-2019/

Microsoft states it's hardware-agnostic, as it leverages FP16.

If you want the non-technical simplified explanation in regards to consoles themselves, that can also be found here: https://lordsofgaming.net/2020/06/xbox-series-x-directml-a-next-generation-game-changer/

Obviously Sony doesn't have anything to say on it, because they run with OpenGL/Vulkan and not DirectX.
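On the FP16 point: half precision is what makes the hardware-agnostic claim plausible, since any modern GPU can do fast FP16 math without dedicated Tensor cores. A tiny stdlib sketch of what the format trades away (the `struct` pack format `'e'` is IEEE 754 half precision):

```python
# FP16 stores each value in 2 bytes (vs. 4 for FP32) at the cost of
# precision -- roughly 3 significant decimal digits.
import struct

value = 3.14159265
packed = struct.pack('e', value)          # 'e' = IEEE 754 binary16
roundtrip = struct.unpack('e', packed)[0]

print(len(packed))   # 2 bytes per value
print(roundtrip)     # 3.140625 -- the nearest representable half float
```

That precision loss is acceptable for neural network inference, which is exactly the workload DirectML targets.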





Pemalite said:
Soundwave said:

So where is it? 

Please post a link to Sony or MS using this technology for the AMD based PS5 or XSX. 

Not a demo using an Nvidia PC GPU. That is not the same thing. 

It is rather curious to me that MS has not provided any actual demonstration for this on XBox whatsoever, and Sony has nothing to say on the topic at all. 

Again. I have already linked to Direct ML. It's not an AMD technology, never even mentioned nVidia in the original post. - Did you not read my post that was in response to yours?

Either way it's written in here: https://gamrconnect.vgchartz.com/post.php?id=9197491

Or more specifically direct from Microsoft themselves: https://docs.microsoft.com/en-us/windows/win32/direct3d12/dml-intro
With more information here: https://devblogs.microsoft.com/directx/directml-at-gdc-2019/

Microsoft states it's hardware agnostic as it leverages FP16.

If you want the non-technical simplified explanation in regards to consoles themselves, that can also be found here: https://lordsofgaming.net/2020/06/xbox-series-x-directml-a-next-generation-game-changer/

Obviously Sony doesn't have anything to say on it, because they run with OpenGL/Vulkan and not Direct X.

That's great and all, but I would say again ... where is it? They've shown really nothing of this on any actual XBox, which raises a lot of questions.

I'm guessing their hardware-agnostic implementation has drawbacks in performance cost. Otherwise they would be crowing about it from every rooftop.

Especially if Sony doesn't have an equivalent. Nvidia's DLSS 2.0 is not some smoke-and-mirrors PR buzz; you can run it in actual games right now. The Series X is three months from launch, and Microsoft has had little to nothing to say about a DLSS-like implementation. That is pretty hard to believe given the performance implications of something like that.

Unless, of course, it doesn't work as well in real-world scenarios (or at least on the AMD hardware) as MS has been saying.



Soundwave said:

By the way where is this magical Microsoft DLSS equivalent? Not one peep out of them about that regarding XBox Series X where they've discussed every other hardware feature ad nauseam.

The XBox division has never mentioned this tech at all, which is curious. MS did have a presentation on it, but wouldn't you know it the GPU they were using to demo that was Nvidia hardware, not AMD.

If Series X can do that why not use it for that Minecraft ray tracing demo that tanked the hardware's performance down to 1080p. Surely you could render it at even 720p native and then scale up to an even better 1440p if the chip was capable of doing so? You would get an even better image quality while actually taxing the system far less, so it kinda begs the question where exactly that super duper ML tech is. 

Because we know for Nvidia it's here and it's now, no hype, no fuss, there are games using it that you can play right now. 

So one has to ask exactly why they're not using that. My guess is on the PC ML demo they showed they were using Nvidia's Tensor cores to help achieve that effect. You would think especially Sony, if something like that was possible with the AMD GPU they are using, they'd be shouting about it from the rooftops. 

How often exactly is a major, major hardware feature that dramatically impacts performance not even talked about for any gaming hardware 3 months prior to hardware launch?

Well, there is a slight catch right now, based on the Xbox Series X RDNA 2 specifications.

Tensor cores are very specialized cores that accelerate INT8 operations (measured in TOPS), which is what's used to render DLSS according to Digital Foundry. The thing is, the Series X, the PS5, and probably the Series S will have cores that can also do them, just not as fast as Nvidia's Tensor cores. Now, due to architectural and software differences and so on, maybe RDNA 2 won't need them to be as fast, so who knows. I am not that deep into how DLSS works at a low level lol.

The point is, though, with what we know now, if we were to port DLSS over to the Series X, it would take twice as long to render compared to a 2060.

And there's an interesting article about it here, too:

https://www.eurogamer.net/articles/digitalfoundry-2020-image-reconstruction-death-stranding-face-off

"There's an important point of differentiation between Nvidia's hardware and AMD's, however. The green team is deeply invested in AI acceleration across its entire business and it's investing significantly in die-space on the processor for dedicated AI tasks. AMD has not shared its plans for machine learning support with RDNA 2, and there is some confusion about its implementation in the next-gen consoles. Microsoft has confirmed support for accelerated INT4/INT8 processing for Xbox Series X (for the record, DLSS uses INT8) but Sony has not confirmed ML support for PlayStation 5 nor a clutch of other RDNA 2 features that are present for the next generation Xbox and in PC via DirectX 12 Ultimate support on upcoming AMD products.

Broadly speaking then, the Xbox Series X GPU has around 50 per cent of the RTX 2060's machine learning processing power. A notional DLSS port would see AI upscaling take 5ms to complete, rather than a 2060's circa 2.5ms. That's heavy, but still nowhere near as expensive as generating a full 4K image - and that's assuming that Microsoft isn't working on its own machine learning upscaling solution better suited to console development (spoilers: it is - or at least it was a few years back). In the meantime though, DLSS is the most exciting tech of its type - we're sure to see the technology evolve and for Nvidia to leverage a key hardware/software advantage. The only barrier I can see is its status as a proprietary technology requiring bespoke integration. DLSS only works as long as developers add it to their games, after all.

As exciting as the prospects for machine learning upscaling are, I also expect to see continued development of existing non-ML reconstruction techniques for the next-gen machines - Insomniac's temporal injection technique (as seen in Ratchet and Clank and Marvel's Spider-Man) is tremendous and I'm fascinated to see how this could evolve given access to the PS5's additional horsepower."

With that being said, there are still many months for Sony, MS, and AMD to reveal more about their GPUs and other features, so things could change.
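To put Digital Foundry's numbers above in frame-budget terms (a quick sketch using their quoted figures: ~2.5 ms on an RTX 2060 and ~5 ms on a Series X with roughly half the 2060's INT8 throughput):

```python
# Share of the frame budget a notional DLSS pass would eat, using the
# Digital Foundry estimates quoted above.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

upscale_cost_ms = {"RTX 2060": 2.5, "Series X (estimated)": 5.0}

for fps in (30, 60):
    budget = frame_budget_ms(fps)
    for gpu, cost in upscale_cost_ms.items():
        share = cost / budget
        print(f"{gpu} @ {fps} fps: {cost} ms = {share:.0%} of a {budget:.1f} ms frame")
```

Five milliseconds is 30% of a 60 fps frame, which is why DF calls it "heavy" while still noting it's far cheaper than rendering native 4K.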



                  
