
Forums - Nintendo Discussion - Next Switch tech talk

Pemalite said:

You do realize that will impact memory bandwidth? Essentially 8GB can run at full speed, the extra 4GB at a lower speed...

8 is doable... But will hold the system back, especially with how dram hungry Sony and Nintendo's operating systems tend to be for the features on offer.
It's also Nintendo though, they don't like to be cutting edge.

TFLOPS doesn't tell us the hardware's performance.

Nor does it need to match the Series S in TFLOPS to outperform it.

Bold 1: They can do what MS did and utilize that 4GB for OS purposes instead of games. 

Bold 2: I'm going to disagree here. It isn't the DEFINITIVE MEASURE of performance, but it does tell us about the hardware and how it performs. In addition, I included this information because it's one of the few things we have confirmed from the leak, not because it is the only measure. Unfortunately, CPU cores, frequency, GPU frequency, RAM amount or bandwidth, screen resolution, etc. are all just guesses at this point. Typically based on Jetson AGX hardware. 

Bold 3: I wasn't trying to say that it wasn't as capable as the Series S because of TFLOPS alone. But given the confirmation about what the cutbacks from GA10B are, as well as portable devices' historical limitations and frequency cuts to maintain appropriate battery life (see PSP, PS Vita, Switch), we already know that it isn't going to be as capable with regard to raw power. Not at the same resolution, at least. 
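To make the split-speed memory point from the quote above concrete, here is a sketch in the style of the Xbox Series X's fast/slow pool split. The 8GB/4GB capacities come from the thread; the bandwidth figures are purely hypothetical, not leaked specs.

```python
# Sketch of a split-speed memory pool: 8GB on fully populated
# channels, 4GB on a narrower subset (as on Xbox Series X).
# Bandwidth numbers below are assumed for illustration only.
pools = [
    {"size_gb": 8, "bw_gbps": 102.4},  # fast pool (assumed figure)
    {"size_gb": 4, "bw_gbps": 51.2},   # slow pool (assumed figure)
]

# If the OS reserves the slow 4GB (as suggested above), games only
# ever touch the fast pool, so the split costs them nothing:
game_pool = [p for p in pools if p["bw_gbps"] == 102.4][0]
game_bw = game_pool["bw_gbps"]  # 102.4 GB/s
total_gb = sum(p["size_gb"] for p in pools)  # 12 GB
```

The design trade-off is that a game which spills hot data into the slow pool gets paced by the slower bandwidth, which is why reserving that pool for the OS sidesteps the problem.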



Doctor_MG said:

Bold 1: They can do what MS did and utilize that 4GB for OS purposes instead of games. 

Yes they can.
Although I would hope not.
There isn't a need for an OS to gobble 4GB of DRAM on a handheld... That is a waste of power and resources.

Doctor_MG said:

Bold 2: I'm going to disagree here. It isn't the DEFINITIVE MEASURE of performance, but it does tell us about the hardware and how it performs. In addition, I included this information because it's one of the few things we have confirmed from the leak, not because it is the only measure. Unfortunately, CPU cores, frequency, GPU frequency, RAM amount or bandwidth, screen resolution, etc. are all just guesses at this point. Typically based on Jetson AGX hardware. 

No it doesn't.
A GPU with less Teraflops can outperform a GPU with more Teraflops in gaming.

It is a hypothetical denominator, not a real world one.

What about INT4, INT8, INT16, FP8, FP16, FP64? Your "teraflops" doesn't tell us squat about those... A game may not use single precision floating point (FP32) at all, it may use FP16, sidestepping your Teraflop counts entirely... Which is a likely proposition in the handheld space due to performance/battery life reasons.

What about Pixel/Geometry/Texture fillrates? Again. Teraflops tells us nothing about those.

At the end of the day... If we take a Radeon 5870 at 2.72 Teraflops and compare it against the Radeon 7850 at 1.76 Teraflops... By your measure and assumption, the Radeon 5870 would win due to having almost an extra Teraflop of FP32? You would be wrong.
They even have the same DRAM bandwidth of 153.6GB/s.

But the 7850 is indeed faster.
Don't take my word for it: https://www.anandtech.com/bench/product/511?vs=549

And that is comparing GPUs from the same company... Things get even crazier if we start to compare AMD and nVidia.
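The "hypothetical denominator" point can be put in numbers: peak TFLOPS is just shader count × clock × 2 (one fused multiply-add per core per clock), which is exactly how the two cards above get their on-paper ratings even though the lower-rated 7850 wins in real benchmarks.

```python
def peak_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput: one fused multiply-add
    (2 floating-point ops) per shader core per clock."""
    return shader_cores * clock_ghz * 2 / 1000.0

# The two cards compared above, using their public specs:
hd5870 = peak_tflops(1600, 0.850)  # ~2.72 TFLOPS
hd7850 = peak_tflops(1024, 0.860)  # ~1.76 TFLOPS
```

The metric counts raw ALU throughput only; fillrate, scheduling efficiency, and memory behaviour never enter the formula, which is why the "faster on paper" card can lose in practice.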

Doctor_MG said:

Bold 3: I wasn't trying to say that it wasn't as capable as the Series S because of TFLOPS alone. But given the confirmation about what is cutbacks from GA10B, as well as portable devices historical limitations and cuts to frequency to maintain appropriate battery life (see PSP, PS Vita, Switch) we already know that it isn't going to be as capable with regard to raw power. Not at the same resolution, at least. 

Doesn't need to match it in raw power. Again... TFLOPS doesn't tell the entire story.

Efficiency is far more important than brute force.
nVidia tends to engineer its GPUs to do as little work as possible, ironically; hence the efficiency jump with Maxwell.




--::{PC Gaming Master Race}::--

Pemalite said:

*snip*

Congrats. You decided to completely ignore that he said it's true but we have no other info. Great debating skills. 



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Doctor_MG said:

By the time 2024 comes around and a Switch successor has launched, mobile hardware will be at a point where it could match the performance of a Series S, yes. However, expecting this in a product that is supposed to sell for sub-$400 and be powered by 10-15 watts is not likely to happen. Nintendo's history with mobile hardware shows that they aren't going to go for the most powerful product, and the recent Nvidia leak practically confirms this.

That being said, there is additional information that I would like to share. We know that the leak mentions Orin, but many don't know that the CUDA core count was found as well: it's about 1,500. GA10B has 2048 CUDA cores with 4TFLOPS of FP32 compute @1GHz. Given the information about the core count, Switch 2 will be capable of about 3TFLOPS of FP32 compute @1GHz. That said, DON'T expect 1GHz. If the Switch 2 follows its predecessor, then the clock speeds will be pretty sharply reduced (i.e. about 307MHz-768MHz). Therefore, portably we can expect around 1TFLOPS of performance, and docked we will get about 2.5TFLOPS. Provided all of the information in the leaks is up to date and accurate (remember, things do change). Personally, I think this is VERY good even if it doesn't match the power of the Series S.

I've read a 1,536 core count some time ago as well (6 times the Switch), but if I'm remembering correctly it was more of a guess from the leaker.
It's not a random number since the Switch 2 is supposedly based on Ampere architecture and Ampere GPUs are modular. That is exactly the count of a single Ampere GPU processing cluster (the RTX3080 has 7 for example).

About clock speeds: it's reasonable to expect lower clocks compared to OEM form factors (considering both the size of the device and battery constraints), but it's also reasonable to expect higher clock rates compared to the Switch 1 due to improvements in chip manufacturing technology. Remember the Switch 1 was originally designed around a 20nm chip; here we are talking at least 8nm.

That said, I wouldn't trust the rumors too much atm, as it seems they are based more on guesswork from similar known architectures. TBH 1,536 cores seems like a lot for what is essentially a mobile device with an 8nm chip; I would say that is absolutely the upper limit.
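The TFLOPS figures traded in this exchange all come from one formula: cores × clock × 2 FP32 ops (one FMA) per clock. A quick sketch using the rumoured 1,536-core count and Switch-1-style clocks; every input here is a rumour or guess, per the thread, not a confirmed spec.

```python
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    # Ampere CUDA cores retire one FMA (2 FP32 ops) per clock.
    return cuda_cores * clock_ghz * 2 / 1000.0

CORES = 1536  # rumoured count; possibly just the leaker's guess

full     = fp32_tflops(CORES, 1.000)  # ~3.07 TFLOPS at GA10B-style 1GHz
docked   = fp32_tflops(CORES, 0.768)  # ~2.36 TFLOPS at Switch-1 docked clock
handheld = fp32_tflops(CORES, 0.307)  # ~0.94 TFLOPS at Switch-1 portable clock
```

This reproduces the "about 1TFLOPS portable, about 2.5TFLOPS docked" ballpark quoted above, and shows how sensitive the headline number is to the clock guess alone.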

Last edited by freebs2 - on 23 March 2022

eva01beserk said:
Jumpin said:

Nintendo started planning their next hardware years ago.

I'm sure. I was trying to say that they saw the cost of chips now and said no thank you, we can wait. We will just make bank while things settle down, then jump on whatever architectures are available at the time. Let's not kid ourselves, Nintendo is not gonna go crazy with their design. Like the Switch, they will just buy a gen or 2 old chip from Nvidia straight off the shelf.

Why though? Their home consoles always had custom or semi-custom designs, even the WiiU.
Switch was kind of an exception in that regard, but it was a different situation. The Switch was (still) an unproven concept and Nintendo was coming off two of their biggest flops to date. The Tegra X1 chip was sort of a low-hanging fruit, considering Nvidia had already spent R&D on it and hadn't found many use cases for it back then.

This time Nintendo has proven the potential of the Switch form factor both internally and to Nvidia. Considering the sales volumes we are talking about, I'd say it's in the best interest of both parties to use an optimal design for the use case. At the same time, of course, they won't reinvent the wheel; it's most likely going to be a semi-custom solution.

Last edited by freebs2 - on 23 March 2022


Interesting debates so far on what people think it will be.

Does anyone actually think they will try to add some additional GPU/CPU hardware into the dock this time around to be able to get 4k, with a bit of help for DLSS?



 

 

Cobretti2 said:

interesting debates so far on what people think it will be.

Does anyone actually think they will try to add some additional GPU/CPU hardware into the dock this time around to be able to get 4k, with a bit of help for DLSS?

I remember there was some talk around a patent for Supplemental Computing Devices (SCD) some time ago.

But in hindsight it doesn't make much practical sense on paper, since it would mean creating and maintaining production lines for 2 different chips (one for the console and one for the dock), designing and producing two different heat-dissipation systems, and finding a suitable connector to ensure sufficient data bandwidth between the two processors.

In the end, if the use case is the next-gen console, it would be more effective to simply design and produce a more expensive chip in the console (for example, a single chip on a 5nm node rather than 2 chips on 8nm).

Even if the use case is simply extending the lifecycle of the current Switch 1 for TV users, it would make more sense to release a TV-only version of the console that integrates both the CPU and a larger GPU on a single larger chip, rather than connecting the Switch to a powered dock with a smaller GPU chip.

Last edited by freebs2 - on 23 March 2022

Cobretti2 said:

interesting debates so far on what people think it will be.

Does anyone actually think they will try to add some additional GPU/CPU hardware into the dock this time around to be able to get 4k, with a bit of help for DLSS?

This is pretty much what I would expect to actually happen. Heck, I thought it would be something we could have had with the OG Switch, to give a boost to TV output. But I wouldn't bet any money on it.



Cobretti2 said:

interesting debates so far on what people think it will be.

Does anyone actually think they will try to add some additional GPU/CPU hardware into the dock this time around to be able to get 4k, with a bit of help for DLSS?

I'm not technical on this stuff, but in old threads from years ago commenting on that idea, people who seemed more technical would always say it wouldn't be possible, I guess because the connection between the Switch and the dock isn't fast enough to allow such a thing.

Anyway, I'm pretty much expecting Switch 2 to be powered along the lines of PS4/XB1, but using DLSS magic to bump resolutions up when docked, from a native 1080p to 4K. I'm guessing the Switch screen will probably be like 900p. So essentially, docked, I expect it to be fairly on par with PS4 Pro or XB1X.

With that sort of power Switch 2 could easily get last gen ports from PS4/XB1 as well as plenty of current gen games.

Consider that Xbox Series games need to be made for the lower-powered Series S, which I would think Switch 2 would be able to handle. For those games, probably drop the resolution and maybe the framerate a bit (while still keeping the framerate solid) and make very minor graphical downgrades, much smaller than the PS4/XB1-to-Switch downgrades. Then in docked mode they could bump the resolution back up with DLSS, so the games would be pretty close to how they run on Series S, at least in docked mode; handheld would have a lower resolution, but it's a tiny screen so it'd still look good.

And at least so far and probably for a bit longer PS5 games are releasing on PS4 as well which means they would be within the range of Switch 2.

So I think we could definitely see Nintendo picking up more current-gen (2021 and later) multiplat console games, which would be a nice addition to Nintendo's library, bringing in more AAA multiplat 3rd party games. I don't think they'll specifically target this goal; their goal will probably be to keep the price to no more than $300. But I'm guessing that by like 2025, when I'm assuming it'll launch, this will be doable.
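The render-cost side of that DLSS argument is just pixel counting: the GPU pays per frame for the internal resolution, while DLSS reconstructs the higher output resolution. The pixel counts below are standard; the choice of 1080p as the internal resolution is illustrative.

```python
# Per-frame pixel counts for the resolutions discussed above.
res = {
    "900p":  1600 * 900,   # plausible handheld screen
    "1080p": 1920 * 1080,  # assumed native docked render
    "4K":    3840 * 2160,  # docked output target
}

# Native 4K costs 4x the shading work of native 1080p:
native_ratio = res["4K"] / res["1080p"]  # 4.0

# With DLSS rendering internally at 1080p and outputting 4K,
# per-frame shading cost stays near the 1080p figure:
dlss_internal_pixels = res["1080p"]
```

That 4x gap is the whole appeal: a roughly PS4-class GPU can present a 4K image without paying a 4K shading bill.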



Cobretti2 said:

interesting debates so far on what people think it will be.

Does anyone actually think they will try to add some additional GPU/CPU hardware into the dock this time around to be able to get 4k, with a bit of help for DLSS?

I think it's unlikely to happen because it would add too much cost for not much gain, unless they really want to jack the price up. USB4 is certainly fast enough to support external GPUs, but having a portable device with its own GPU plus another GPU in the dock within a $400 price point is asking too much imo. And idk if people are willing to pay $450-$500.

I think one reasonable way they could pull it off is to ship the base dock without any GPU, just HDMI and the like, similar to the Switch, and sell docks with more powerful GPUs separately. But I don't think Nintendo is the type of company to go that hard into hardware, and assuming the Switch 2 has DLSS, it can do wonders even upscaling from low resolutions to 4K.
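On the USB4 point, a back-of-the-envelope check shows the other constraint on a dock GPU: the link is tiny compared to local memory bandwidth. The 40 Gbps link rate is USB4's nominal figure; the LPDDR5 configuration is an assumed example for comparison, not a Switch 2 spec.

```python
# Nominal peak rates; real-world throughput is lower due to
# protocol overhead on the link.
USB4_LINK_GBPS = 40                        # gigabits per second
usb4_gbytes = USB4_LINK_GBPS / 8           # 5.0 GB/s

# Assumed local memory for comparison: 128-bit LPDDR5 @ 6400 MT/s,
# i.e. 16 bytes transferred per memory-transfer cycle.
lpddr5_gbytes = 6400e6 * 16 / 1e9          # 102.4 GB/s

fraction = usb4_gbytes / lpddr5_gbytes     # ~0.05
```

The cable delivers roughly 5% of local memory bandwidth, so a dock GPU would have to work almost entirely out of its own VRAM, which is exactly the extra cost being objected to above.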



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850