sc94597 said:
Why isn't it already running fine on the Steam Deck, though? Why is this a common performance issue on RDNA2 and RDNA3 AMD iGPUs rather than an issue isolated to the Steam Deck? Why doesn't the problem exist on AMD's dGPUs? Those are all important questions if we are to extrapolate from micro-architecture comparisons. Additionally, what does being a custom port give the SW2 beyond standard Ampere chips running this on PC? See the RTX 3050 45W example, where the Switch 2 (docked) is doing about 9.6 fps per TFLOP and the RTX 3050 (45W), at similar (slightly higher) settings, is doing about 12-16 fps per TFLOP. Where are SW2's optimization gains beyond generic PC Ampere?

I am not convinced that "the Switch 2 treatment" -- making a game that looked to be running into CPU bottlenecks run better by reducing asset density -- would help the Steam Deck. I am also not convinced that a game running at 18-24 fps @260p internally can easily be optimized to achieve a solid 30 fps @540p when it shares its micro-architecture with the systems the game was very much optimized for in the initial build (PS5, Series S/X, AMD dGPUs).

Unless, that is, the issue really does come down to how AMD's iGPUs work vs. dGPUs -- such as Infinity Cache being a large part of why a 6700 (which originally slotted against a 3060 Ti, not a 3060) is on par with a 3060 in this game and not in others. Different games benefit differently from it.
Unless someone at Ubi goes public and tells us why, I doubt we'll ever figure it out.
What I find odd is that you think a custom-made port for the Deck would not solve the issues. Say Ubisoft really decides to go crazy and waste the money, takes the XSS code (since they share the same architecture), and distills it further, instead of what we have now: a general PC port that can't go as low on many settings as the Switch 2 port does (not just asset density).
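For anyone wondering how those per-TFLOP figures in the quote are derived, it's just average framerate divided by the GPU's theoretical FP32 throughput. A minimal sketch below; the function name and the numbers plugged in are placeholders for illustration, not confirmed specs for either chip.

```python
def fps_per_tflop(avg_fps: float, fp32_tflops: float) -> float:
    """Normalize an average framerate by the GPU's theoretical FP32 throughput."""
    return avg_fps / fp32_tflops

# Hypothetical example: ~48 fps average on a GPU rated at ~4.0 TFLOPS FP32
print(round(fps_per_tflop(48.0, 4.0), 1))  # -> 12.0
```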