Pemalite said:

sc94597 said:

Where are you getting this from? 

It seems highly unlikely that the Switch 2 will release with a chipset that is "roughly a base Xbox One." GCN 1.0 is ancient compared to Ampere (about eight years older), the Switch 2 will likely have far faster memory (LPDDR5 vs. DDR3, and no, the 32 MB of eSRAM doesn't fully make up for the difference), and of course a much better CPU than the Jaguar cores in the XBO. 
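To put rough numbers on the memory claim, here's a quick peak-bandwidth sketch. The XBO DDR3 figure is a public spec; the Switch 2 LPDDR5 configuration (6400 MT/s on a 128-bit bus) is an assumption based on rumors, not a confirmed figure:

```python
# Peak bandwidth = effective transfer rate (MT/s) x bus width (bits) / 8,
# converted to GB/s.
def bandwidth_gbs(mts: int, bus_bits: int) -> float:
    return mts * bus_bits / 8 / 1000.0

xbo_ddr3 = bandwidth_gbs(2133, 256)        # ~68.3 GB/s (public spec)

# Assumed Switch 2 config: 6400 MT/s LPDDR5 on a 128-bit bus.
switch2_lpddr5 = bandwidth_gbs(6400, 128)  # ~102.4 GB/s
```

Even this conservative guess puts the main memory pool well ahead of the XBO's DDR3 without needing an eSRAM crutch.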

Even if we look at the theoretical performance of the rumored, hypothetical Switch Orin chip, it is estimated at 1.9 FP32 TFLOPS, whereas the XBO was estimated at only 1.3 FP32 TFLOPS. And again, with the much better CPU, newer architecture, and higher RAM bandwidth, that theoretical performance will likely be better realized in actual games. 
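Those TFLOPS figures fall out of the standard peak-throughput formula (2 FP32 ops per shader ALU per clock, via fused multiply-add). The XBO numbers are public spec; the Switch 2 core count and clock below are rumored/assumed values chosen to match the quoted 1.9 TFLOPS:

```python
# Peak FP32 throughput: shader ALUs x 2 ops/clock (FMA) x clock in GHz.
def fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    return shader_cores * 2 * clock_ghz / 1000.0

# Xbox One: 768 GCN shaders at 853 MHz (public spec) -> ~1.31 TFLOPS.
xbo = fp32_tflops(768, 0.853)

# Rumored Switch 2 (T239): 1536 Ampere CUDA cores; the ~620 MHz clock
# is an assumption that lands near the quoted figure -> ~1.90 TFLOPS.
switch2 = fp32_tflops(1536, 0.62)
```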

The Switch 2 likely won't be able to render at native 4K, but 1080p (or close to it) at 60 fps (or higher) with DLSS definitely is doable. 540p -> 1080p seems likely only for the seemingly inevitable Series S-to-Switch 2 ports, akin to what the Steam Deck does today (but likely handled better, if these specs are true), or as a way to improve battery life by running games at lower power settings. 
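The pixel math shows why 540p -> 1080p is a comfortable DLSS target: it's exactly a 2x upscale per axis, i.e. a 4x jump in pixel count, the same ratio DLSS uses in its most aggressive standard mode for 1080p -> 4K:

```python
# Compare rendered vs. output pixel counts for an upscaling pass.
def pixels(width: int, height: int) -> int:
    return width * height

native = pixels(960, 540)    # 518,400 pixels actually rendered
target = pixels(1920, 1080)  # 2,073,600 pixels on screen
ratio = target / native      # 4.0x reconstruction factor
```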

The issue with the Xbox One is that the eSRAM came at the expense of GPU resources: render output units (ROPs), shader pipelines, and texture mapping units were all cut back to make room for a high-speed cache on the die. The result was a game of sacrifices rather than leveraging the eSRAM to increase efficiency, which it can do.

If Microsoft had hypothetically matched the PlayStation 4's GPU/RAM setup while retaining that 32 MB of cache, there is no doubt the Xbox One would have had a graphics edge by a significant margin.

Sure. I don't think we disagree there. The 32 MB of eSRAM definitely helps with realizable memory bandwidth, although it takes up a lot of die area on the SoC at the expense of other resources. The interesting question is what would have happened if Microsoft had gone the expensive route: keep the DDR3 + eSRAM setup, but use a bigger SoC that matched the PS4 without compromising on other resources. (This wouldn't have made economic sense, but we're talking hypothetically.) Would it "have had a graphics edge by a significant margin"? I think it would depend on the individual game and the nature of the assets being streamed from memory. Any title designed around a few large assets would probably favor the PS4, while anything built on many smaller assets would probably favor the hypothetical XBO.
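The large-vs-small asset point can be sketched with a toy working-set check (all numbers here are illustrative, not measured from any real game): a frame whose hot data fits inside the 32 MB eSRAM gets the full cache benefit, while one dominated by a few oversized buffers spills to slow DDR3.

```python
# Illustrative sketch: does a frame's hot working set fit in 32 MB eSRAM?
ESRAM_BYTES = 32 * 1024 * 1024

def fits_in_esram(asset_sizes_bytes) -> bool:
    """True if the entire working set can live in the fast on-die cache."""
    return sum(asset_sizes_bytes) <= ESRAM_BYTES

# Many small assets: 64 buffers of 256 KiB = 16 MiB -> fits, eSRAM shines.
small_assets = [256 * 1024] * 64

# A few large assets: two 24 MiB render targets = 48 MiB -> spills to DDR3.
large_assets = [24 * 1024 * 1024] * 2
```

Here `fits_in_esram(small_assets)` is True while `fits_in_esram(large_assets)` is False, which is the crux of the game-by-game argument above.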