
VGC: Switch 2 Was Shown At Gamescom Running Matrix Awakens UE5 Demo

So it's confirmed, not a rumor?




The Steam Deck comparison is really silly.

1. The Steam Deck, as impressive as it is, runs games through a compatibility layer (Proton) that imposes roughly a 10% performance hit on average (see the sketch after this list). Valve has done a lot to reduce that hit in optimized titles, but basically any game they haven't fine-tuned for is going to take about a 10% hit versus running natively on Linux. This is a software problem, not a hardware one. If games were developed directly for Linux/SteamOS, the Steam Deck would gain a significant performance boost in many of them.

2. By the time the Switch 2 releases, the Steam Deck will be almost three years old, and we'll probably be talking about an incoming Steam Deck 2 release at that point.

3. The Steam Deck has about the same power demands as a Nintendo Switch (maxing out at a 15W TDP).

4. Despite looking like one, the Steam Deck is not a closed platform. It runs games that had to be optimized for a wide range of hardware.
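A minimal sketch of the overhead claim in point 1, treating the ~10% Proton hit as a flat multiplier (the figure is this post's estimate, not a benchmarked constant):

```python
# Flat-overhead model of Proton's compatibility-layer cost (~10% per the post).
def effective_fps(native_fps: float, proton_overhead: float = 0.10) -> float:
    """Frame rate after a flat compatibility-layer performance hit."""
    return native_fps * (1 - proton_overhead)

print(effective_fps(60.0))  # 54.0 -> a 60 fps native title lands around 54 fps
```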

A Switch 2 built on a 5nm process node is going to see efficiency gains, by the simple fact that 5nm provides about a 30% performance-per-watt improvement over 7nm. That headroom can be spent on reduced cooling (while delivering the same performance as a Steam Deck), on higher performance with the same cooling, or a little of both (rough arithmetic below).
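Rough arithmetic for that trade-off, assuming the ~30% performance-per-watt figure above (a ballpark for 5nm vs. 7nm, not a measured spec):

```python
# Two ways to spend a ~30% performance-per-watt gain from a 5nm node.
STEAM_DECK_TDP_W = 15.0     # Steam Deck's max TDP, per point 3 above
PERF_PER_WATT_GAIN = 1.30   # 5nm vs 7nm, per the post's estimate

# Option A: same performance, lower power draw (and thus less cooling).
same_perf_watts = STEAM_DECK_TDP_W / PERF_PER_WATT_GAIN
print(f"Steam Deck performance at ~{same_perf_watts:.1f} W instead of 15 W")

# Option B: same 15 W power budget, spent entirely on performance.
print(f"Or ~{PERF_PER_WATT_GAIN:.0%} of Steam Deck performance at 15 W")
```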

Things often don't make sense if you don't know basic details about them. 

Last edited by sc94597 - on 11 September 2023

The main point is that PS5's GPU will (or may) perform notably better than Switch 2's on a per-TFLOPS basis even if the tech powering the latter is more advanced. It's safe to say the gap in tech between the Steam Deck and the PS4 is a lot greater than that between the Switch 2 and the PS5. And I look forward to the day PC folks make up their minds as to whether or not "optimization" remains a console advantage, because whenever I bring up that optimization is indeed a console advantage, I get a bunch of them lecturing me that this is no longer the case.

If you think Switch 2's more advanced tech will translate to better performance per TFLOPS (vs. PS5), you're just setting yourself up for disappointment. Lower your expectations and you won't be disappointed.

Last edited by Kyuu - on 11 September 2023

Kyuu said:


If you think Switch 2's more advanced tech will translate to better performance per TFLOPS (vs. PS5), you're just setting yourself up for disappointment. Lower your expectations and you won't be disappointed.

It gets even trickier than that. Depending on the data paths available in the shading units, "maximum TFLOPS" can be counted in a way that inflates the number (with respect to gaming workloads, not, say, scientific computing). Since Ampere, for example, Nvidia counts both its FP32 cores and its shared FP32/INT32 cores in its TFLOPS estimate, despite the fact that in a gaming load roughly 26% of the calculations are likely INT32 rather than FP32 (according to Nvidia's own estimate). A toy calculation follows the link below.

See: https://www.neogaf.com/threads/nvidia-ampere-teraflops-and-how-you-cannot-compare-them-to-turing.1564257/
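To make that concrete, here's a toy upper-bound calculation; the 26% INT32 share is Nvidia's estimate cited above, while the 10 TFLOPS figure is just a placeholder, not any real SKU:

```python
# Why Ampere's advertised TFLOPS overstate gaming FP32 throughput: the paper
# number counts both the pure-FP32 datapath and the shared FP32/INT32
# datapath as if every issue slot did FP32 work.
advertised_tflops = 10.0  # placeholder marketing figure
int32_share = 0.26        # Nvidia's estimate of INT32 instructions in games

# Every INT32 instruction occupies a slot the advertised number counts as
# FP32, so FP32 throughput in a gaming load is at most:
effective_fp32 = advertised_tflops * (1 - int32_share)
print(f"~{effective_fp32:.1f} TFLOPS of FP32 usable while gaming")
# -> ~7.4 TFLOPS; the headline figure overstates gaming FP32 by ~35%.
```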

And then there is the complication that is ray tracing and "ray-tracing TFLOPS."

Using TFLOPS as a metric of real-world gaming performance (again, as opposed to, say, scientific computing) across different architectures, platforms, and especially brands is just not a good idea.

I might have missed it, though; where were people discussing performance per TFLOPS?

Last edited by sc94597 - on 11 September 2023

sc94597 said:
Kyuu said:


If you think Switch 2's more advanced tech will translate to better performance per TFLOPS (vs. PS5), you're just setting yourself up for disappointment. Lower your expectations and you won't be disappointed.

It gets even trickier than that. Depending on the data paths available in the shading units, "maximum TFLOPS" can be counted in a way that inflates the number (with respect to gaming workloads, not, say, scientific computing). Since Ampere, for example, Nvidia counts both its FP32 cores and its shared FP32/INT32 cores in its TFLOPS estimate, despite the fact that in a gaming load roughly 26% of the calculations are likely INT32 rather than FP32 (according to Nvidia's own estimate).

See: https://www.neogaf.com/threads/nvidia-ampere-teraflops-and-how-you-cannot-compare-them-to-turing.1564257/

And then there is the complication that is ray tracing and "ray-tracing TFLOPS."

Using TFLOPS as a metric of real-world gaming performance across different architectures, platforms, and especially brands is just not a good idea.

I might have missed it, though; where were people discussing performance per TFLOPS?

Maybe I misread, and I don't feel like going back through every comment, but I'm pretty sure Soundwave suggested that the Switch 2 could do more than the Series S/X and PS5 (which he at least twice referred to as "RDNA 1.5") per TFLOPS, due to the more advanced architecture and technologies like DLSS. Basically, he thinks the Switch 2 can "effectively" do nearly as much with 3 or 3.5 TFLOPS as the Series S can with 4 TFLOPS.

I know very well that TFLOPS is a misleading metric, but it's what countless people still use to gauge how powerful a console is on paper. And they judge TFLOPS efficiency by how it materializes in games (resolution, fps, settings, etc.). I know it's not that simple and that countless factors go into this.

Last edited by Kyuu - on 11 September 2023


He never said it would be close to Series S power, basically. Anyway, the conversation has been beaten to death, and I guess we'll just have to wait and see what the results are. I personally think it's going to be around Steam Deck level with DLSS, because I've been hearing the best-case possibilities from Nintendo fans since the Wii, and it always turns out worse.

Last edited by zeldaring - on 11 September 2023

Kyuu said:

I very much doubt the Switch 2 is going to have more "realized power per TFLOPS" than the Series S or X, let alone the PS5. Being more advanced and recent will not make up for the downsides associated with its handheld nature. So even disregarding the deficiencies in CPU, storage, and RAM, I would expect Switch 2's GPU to be less efficient than the current consoles'.

PS5's so-called "RDNA 1.5" has proven to be more performant per TFLOPS (and per buck) than the Series S or X with their Velocity Architecture, ML acceleration, DirectStorage, hardware-accelerated VRS, "true RDNA2," and other buzzwords. PS5's supposedly inferior RDNA implementation (which was supposed to overheat and throttle to just 8 TFLOPS, according to "experts") is literally more efficient according to real-world results.

Even the PS4, with its outdated GPU design and terrible CPU, holds its own against the Steam Deck, which also has twice the RAM (though at lower speeds). Per TFLOPS, the PS4's GPU is more or less as performant/efficient as the much more technologically advanced Steam Deck GPU.

I think some of you are underestimating handheld limitations.

Thank you.
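To put numbers on the kind of per-TFLOPS comparison quoted above: the paper TFLOPS below are the commonly cited specs, but the frame rates are made-up placeholders purely to show the normalization, not real benchmark results:

```python
# "Performance per TFLOPS" is just measured performance over the paper spec.
def perf_per_tflops(avg_fps: float, paper_tflops: float) -> float:
    return avg_fps / paper_tflops

ps4  = perf_per_tflops(avg_fps=30.0, paper_tflops=1.84)  # hypothetical fps
deck = perf_per_tflops(avg_fps=26.0, paper_tflops=1.60)  # hypothetical fps

print(f"PS4: {ps4:.1f} fps/TFLOPS, Steam Deck: {deck:.1f} fps/TFLOPS")
# Similar ratios would support the claim that the older GPU holds its own
# per TFLOPS despite the newer architecture.
```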



What happened to the bet?



I'm still down for my bet: the Series S will win easily.

Another thing: why do developers not like using FSR on PS5?

Last edited by zeldaring - on 11 September 2023

zeldaring said:
Chrkeller said:

Wait, VG has a way to ignore people? Well, that solves my problem. Thanks, my friend.

I was thinking... if a PC gamer told me my PS5 couldn't produce the visual fidelity of his RTX 4090, not only would I not argue for pages, but I would agree.

Meaning I don't know why some take discussions about hardware so personally.

You know why he won't agree, but I guess we're so bored that we keep responding.

A "PS3 Pro" is exactly where the Switch was, which is not a bad thing at all. I mean, just look at all the ports from Wii U to Switch and from PS4 to PS4 Pro. I think "PS3 Pro" is actually where it lines up perfectly.

There are many hardware differences between a PS3 and a PlayStation 4 beyond raw performance, aimed at improving efficiency and visuals, many of which the Switch matches or sometimes even exceeds:

* Polymorph Engines (tessellation)
* Shader Model 6.7 (PS3 was SM3, PS4 SM6.5)
* Vertex Shader 5.0
* Geometry shaders
* Support for higher-resolution textures (16K), not to be confused with display resolution
* Compute shaders

And more. It's not just about raw throughput, but about hardware feature sets and efficiency.

So calling the Switch a "PlayStation 3 Pro" is highly disingenuous; it's nothing like it at either a high or a low level.

Oneeee-Chan!!! said:

The GPU of the PS3 is equivalent to a 7600 GT.
If there were a PS3 Pro, it would be equivalent to a 7800 GTX or 7900 GTX.

The PS3 had a GeForce 7-class GPU with a core layout of 24:8:24:8 (pixel shaders : vertex shaders : texture mapping units : render output units).
Core: 550MHz. Memory bandwidth: 20.8GB/s.

Its GPU core is a like-for-like match for the GeForce 7800 GTX 512MB... but with less than half the bandwidth and half the ROPs.
7800 GTX: 24:8:24:16 (pixel shaders : vertex shaders : texture mapping units : render output units).
Core: 550MHz. Memory bandwidth: 54.4GB/s.

The 7600 GT has half the functional units, with a core layout of 12:5:12:8 @ 560MHz and 22.6GB/s of memory bandwidth.

In non-memory-bandwidth-limited (i.e. shader-heavy) scenarios, the PS3 should be able to get fairly close to the 7800 GTX, but once you bog it down with alpha effects, it will come up short (see the fillrate arithmetic below).
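The fillrate arithmetic behind that, using only the layout figures quoted above (operations per clock times clock speed; the numbers are this post's, not re-verified):

```python
# Pixel and texture fillrates from the core layouts above.
def gpixels_per_s(rops: int, mhz: int) -> float:   # pixel fillrate
    return rops * mhz / 1000.0

def gtexels_per_s(tmus: int, mhz: int) -> float:   # texture fillrate
    return tmus * mhz / 1000.0

#                      ROPs, clock            TMUs, clock
rsx     = (gpixels_per_s(8, 550),  gtexels_per_s(24, 550))  # PS3 RSX
gtx7800 = (gpixels_per_s(16, 550), gtexels_per_s(24, 550))  # 7800 GTX 512MB
gt7600  = (gpixels_per_s(8, 560),  gtexels_per_s(12, 560))  # 7600 GT

print(rsx, gtx7800, gt7600)
# RSX matches the 7800 GTX on texturing (13.2 Gtexels/s) but has half the
# pixel fillrate (4.4 vs 8.8 Gpixels/s), which is why ROP/bandwidth-bound
# alpha effects are where it comes up short.
```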

Oneeee-Chan!!! said:

Really?
I had heard that the GPU specs were changed just before the PS3 launch.

Besides, aren't the GPUs of the PS3 and Xbox 360 almost equivalent?

If the PS3 GPU is equivalent to a 7900 GT, then which GPU is equivalent to the Xbox 360 GPU?

I was thinking of the X800.

The Xbox 360's GPU was more powerful; it was the CPU where the PS3 had a commanding lead.

The Xbox 360's GPU was closely related to the Radeon 2900, but with a lower core clock and less memory bandwidth.

AMD backported the Radeon 2900 design for Microsoft and made it a custom chip.

sc94597 said:

The 1.7GHz boost is only possible on the top-powered model (80W). The lower-TDP models (the one in the video was a 45W model with a max boost clock of 1.35GHz) have much lower base and boost clocks.

https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3050-Laptop-GPU-Benchmarks-and-Specs.513790.0.html

I was only talking about the 80W model, the highest-end configuration.

sc94597 said:

Also, the RTX 3050 mobile is VRAM-limited with only 4GB. In a game like Final Fantasy 7 Intergrade (among many other recent releases), 4GB is a significant bottleneck at 1080p.

The 3060 with 6GB isn't much better.

It's Nvidia's M.O.

sc94597 said:

A Lovelace Tegra with a TDP of about 15-20W and 8GB of allocatable video memory could be competitive with an RTX 3050 mobile @ 35W-45W, especially on a closed platform where game optimization can be targeted. I'd expect that to become even more true as games continue to become VRAM hogs.

I don't think it will necessarily happen, but it is the upper limit of possibilities.

Edit: It wouldn't be surprising at all if the performance difference between the Switch 2 and an RTX 3050 @ 35W were smaller than the difference between the RTX 3050 @ 35W and the RTX 3050 @ 80W.

You might be on the money, but without confirmation it's just a hypothesis.

zeldaring said:
Chrkeller said:

People need to stop using teraflops. It is a worthless measurement. The PS4 Pro has more flops than the Series S, but the PS4 Pro isn't more powerful.

It's not really a worthless measure; it's just that the Series S has a vastly better CPU and everything else.

No. It's literally a worthless measure.

RDNA3 on a flop-for-flop basis will perform worse than an RDNA2 part, because AMD introduced a dual-issue pipeline.

That means it can "theoretically" have twice the flops, but it will never have twice the performance, as it can only issue a second instruction when the compiler can extract one from the wavefront. And when that can't happen, you only get "half the flops."

This is the same wall AMD hit with its prior VLIW/Terascale designs, and it's why they opted for Graphics Core Next in the first place; it makes everything very driver/compiler-heavy.

This is the dumbed-down version, of course (a toy model below).
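A toy model of that dual-issue argument; the co-issue rates below are illustrative assumptions, not measured figures:

```python
# RDNA3 dual-issue: paper FLOPS assume a second instruction co-issues
# every cycle, but games only hit that when the compiler finds one.
def realized_tflops(base_tflops: float, dual_issue_rate: float) -> float:
    return base_tflops * (1 + dual_issue_rate)

base = 10.0  # placeholder single-issue throughput
print(realized_tflops(base, 1.0))  # 20.0 -> the marketing "2x" number
print(realized_tflops(base, 0.3))  # 13.0 -> a more game-like outcome
print(realized_tflops(base, 0.0))  # 10.0 -> the "half the flops" worst case
```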

The Series S actually has a better GPU than the PlayStation 4 Pro.

The teraflop numbers you see floating around are all theoretical, not real-world benchmarked numbers.



--::{PC Gaming Master Race}::--