Pemalite said:
Chrkeller said:
Rebirth was shockingly well optimized on PC. I ran maxed settings at native 4K; indoors it locked at 120 fps, outdoors 100 to 120 fps.
As for Remake, I'm not even sure my GPU fans turned on.
Madden, FC, Elden, and Remake are all 30 fps. Doesn't shock me; I was worried about memory bandwidth being a bottleneck.
I am a bit surprised Remake doesn't have a 40 fps mode, the game isn't particularly demanding.
|
I would hope you could hit 4K at 120 fps with a 4090... That's the point of a 4090/5090-tier GPU.
curl-6 said:
I wouldn't say any of those necessarily indicate a bottleneck. Madden and FC are 30 fps because they're ports of the PS5 and Xbox Series versions, so they're generally built for stronger hardware (and they don't seem to be very good ports either, as frame pacing is all over the place). Elden Ring is a FromSoftware title, and those are always a bit of a mess. Remake is based on the Intergrade version, so the upgrades there probably use up the system's extra resources.
Switch 2 does have lower bandwidth than recent home consoles in order to preserve battery life, but I don't think I'd describe it as a bottleneck per se, as it doesn't seem to be holding back the rest of the system overly much, relative to, say, the Switch 1.
|
Bandwidth is a difficult thing to quantify these days, especially as GPUs get smarter and more efficient with tile-based rendering, compression, procedural generation, large caches, streaming, neural rendering, and more.
AMD has managed to match or beat Nvidia in many areas with the RX 9000 series vs the RTX 5000 series at certain tiers, despite a significant bandwidth deficit from sticking with cheaper GDDR6 over more expensive GDDR7.
Chrkeller said:
You want to think 102 GB/s isn't a bottleneck, go ahead. You are wrong. There is a reason Strix Halo is going after 256 GB/s.
Perma argued with me over this; post launch, even he has made comments that 102 GB/s will limit fps and resolution, because it will.
And yes, within a fixed hardware ecosystem, doubling fps requires doubling bandwidth.
Frames per SECOND.
GB per SECOND.
(link: "Understanding Memory Bandwidth: Impact on FPS")
Sure, the computation is complex, but claiming 102 GB/s isn't going to limit fps is nonsense.
|
Bandwidth is only a bottleneck if you make it a bottleneck.
The Switch 2 has fixed and quantifiable hardware that developers can work around... If your game is running lots of shaders, you would sooner be GPU compute bound than bandwidth limited.
A GPU is the sum of its parts, not one factor... which I think is the aspect you are ultimately missing here.
Will the Switch 2's GPU be bandwidth limited in some scenarios? Absolutely. But a game's rendering load is extremely dynamic; bandwidth isn't always going to be the limiting factor.
Many rendering loads tend to be "bursty" in their bandwidth demands, i.e. you need lots of bandwidth to fill up the GPU's caches, but then the demand drops off. You aren't going to need the chip's full bandwidth 24/7, especially once those working sets are loaded into the chip's cache.
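That "bursty" point can be sketched with some quick Python; all numbers here are made up for illustration, not measured Switch 2 figures:

```python
# Hypothetical illustration: how a cache hit rate shrinks the bandwidth
# demand that actually reaches external memory.

def external_bandwidth_needed(raw_demand_gbs: float, cache_hit_rate: float) -> float:
    """Traffic that must go to DRAM once the cache absorbs its share of hits."""
    return raw_demand_gbs * (1.0 - cache_hit_rate)

raw_demand = 180.0  # GB/s the shaders *request* during a burst (assumed)
print(external_bandwidth_needed(raw_demand, 0.0))  # no cache: full 180 GB/s hits DRAM
print(external_bandwidth_needed(raw_demand, 0.5))  # 50% hit rate: 90 GB/s, inside a 102 GB/s budget
```

The same raw demand either blows the budget or fits comfortably depending entirely on how much the cache absorbs, which is why chip-level cache sizing matters as much as the DRAM figure.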
Chrkeller said:
But look at what developers are doing across the three main sectors of fidelity.
1) Resolution impacts bandwidth. Is the S2 rendering games at 360p so they look like The Witcher 3 on the S1? Nope. Rendered resolution, in many cases, is quite high.
|
Everything impacts bandwidth.
However... If you break up your scene into tiles, you need less bandwidth, which is why Maxwell was able to beat GCN. It's called doing more with less.
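A rough sketch of why tiling cuts framebuffer traffic; the resolution, overdraw, and fps figures are assumed purely for illustration:

```python
# Illustrative comparison of external framebuffer write traffic:
# an immediate-mode GPU writes every overdrawn pixel to DRAM, while a
# tile-based GPU resolves overdraw in on-chip tile memory and writes
# each final pixel out once.

def framebuffer_traffic_gb(width: int, height: int, bytes_per_pixel: int,
                           overdraw: int, fps: int, tiled: bool) -> float:
    """External framebuffer write traffic per second, in GB."""
    writes_per_pixel = 1 if tiled else overdraw
    return width * height * bytes_per_pixel * writes_per_pixel * fps / 1e9

# 1080p, 4 bytes/pixel, 3x overdraw, 60 fps (assumed workload)
print(framebuffer_traffic_gb(1920, 1080, 4, 3, 60, tiled=False))
print(framebuffer_traffic_gb(1920, 1080, 4, 3, 60, tiled=True))
```

With those assumptions the tiler writes a third of the data for the same on-screen result, which is the "doing more with less" in action.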
Chrkeller said:
2) Image quality impacts bandwidth. Is the S2 rendering games with rebuilt assets, like Hogwarts on the S1? Nope; image quality (especially textures) is quite high.
|
Keep in mind the Switch 2 has a faster CPU and a faster, more modern GPU capable of significantly more advanced effects; you cannot chalk up the image quality gains in Hogwarts to just bandwidth.
Chrkeller said:
3) FPS impacts bandwidth. Is the S2 running games at reduced fps compared to current gen? YES.
|
Many games run at similar framerates to the Xbox Series S... but fps can often be limited by an insufficient CPU rather than by bandwidth.
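For a sense of scale, here's a back-of-the-envelope Python sketch of how much of a ~102 GB/s budget the raw frame output alone consumes at a given fps (resolution and pixel format are assumed; real traffic is far more than final frame writes):

```python
# Bandwidth consumed just writing the finished frame to memory each second.
# Doubling fps doubles this slice, but the slice itself is small.

def frame_output_gbs(width: int, height: int, bytes_per_pixel: int, fps: int) -> float:
    return width * height * bytes_per_pixel * fps / 1e9

budget = 102.0  # GB/s on Switch 2, shared with the CPU
for fps in (30, 60):
    used = frame_output_gbs(2560, 1440, 4, fps)  # 1440p, 32-bit colour (assumed)
    print(f"{fps} fps: {used:.2f} GB/s ({used / budget:.1%} of the budget)")
```

Under these assumptions the final frame writes are under 1% of the budget even at 60 fps; the real contention comes from texture fetches, intermediate render targets, and CPU traffic, which is why the fps/bandwidth relationship isn't a simple doubling in practice.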
Chrkeller said:
Additionally, there is no point in having the CPU/GPU render images that cannot be transferred in a timely manner.
Maybe we have to agree to disagree. I think the GPU is actually above where I thought it would be. But it is limited by bandwidth.
|
That's what caches are for.
Remember the Radeon RX 9060 XT has 320 GB/s of bandwidth and sits just below the RTX 5060 Ti, which has 448 GB/s, a deficit of 128 GB/s. Which tells us that architectural efficiency is often more important than raw bandwidth.
We could take my old RX 580... I upgraded to the RX 6600 XT, same 256 GB/s of bandwidth, and doubled my performance even at 1440p. I then upgraded to the Radeon RX 9060 XT, which has 320 GB/s, an extra 64 GB/s... and doubled performance again. That's a four-fold increase in performance for only a 64 GB/s (a quarter!) increase in bandwidth.
Or let's go back to the Radeon 5870 vs the Radeon 7850. The 5870 has 153GB/s of bandwidth, the Radeon 7850 also has 153GB/s of bandwidth.
The 7850 can beat the 5870 by over 50% with the same memory bandwidth, regardless of resolution.
Efficiency is often the deciding factor over pure black and white bandwidth numbers.
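Putting the figures quoted above into a quick Python table makes the point; the "relative performance" values are just the rough doublings described in the post, not benchmark results:

```python
# Perf-per-GB/s using the bandwidth figures and approximate performance
# ratios cited above (RX 580 normalised to 1.0).

cards = {
    "RX 580":     (256.0, 1.0),  # (bandwidth GB/s, relative performance)
    "RX 6600 XT": (256.0, 2.0),
    "RX 9060 XT": (320.0, 4.0),
}

for name, (bandwidth, perf) in cards.items():
    print(f"{name}: {perf / bandwidth:.5f} relative perf per GB/s")
```

Perf-per-GB/s more than triples from the RX 580 to the RX 9060 XT on the same memory technology class, which is the efficiency argument in one number.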
|