Pemalite said:
We have no idea how the Switch 2's Tegra will stack up against the 3050. It could be cut down several times over and thus worse.
Intel has made it platform agnostic. I think the only hard requirements are INT4 support and DP4a when XMX instructions are not available.
FSR 3.0 is rolling out currently and that brings with it a plethora of improvements that will benefit the Series S.
...Ryzen seems to be doing well on that front.
Honestly I would just like a variable refresh rate display in the Switch 2 rather than any fixed arbitrary refresh rate.
Then Nintendo is likely employing a clamshell memory layout, where 4GB of RAM will operate slower than the 8GB, likely partitioned for the OS/background tasks, similar to the Series X... because a 192-bit memory bus is a bit much for a cost-sensitive mobile chip.
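Quick aside before going point by point, since DP4a comes up above and again in point 2 below: DP4a is just a packed INT8 dot-product-accumulate, i.e. four 8-bit multiplies summed into a 32-bit accumulator in a single instruction, which is what lets the non-XMX fallback path run a quantised upscaling network at a usable speed. A minimal Python emulation of what the instruction computes (signed-INT8 variant assumed; obviously not how XeSS actually invokes it):

```python
import struct

def dp4a(packed_a: int, packed_b: int, acc: int) -> int:
    """Emulate what a (signed) DP4a instruction computes: treat each 32-bit
    word as four int8 lanes, multiply lane-wise, and add the sum to a 32-bit
    accumulator. Hardware does all of this in one instruction."""
    lanes_a = struct.unpack("<4b", struct.pack("<i", packed_a))
    lanes_b = struct.unpack("<4b", struct.pack("<i", packed_b))
    return acc + sum(x * y for x, y in zip(lanes_a, lanes_b))

# dot([1, 2, 3, 4], [5, 6, 7, 8]) + 10 = (5 + 12 + 21 + 32) + 10 = 80
a = int.from_bytes(bytes([1, 2, 3, 4]), "little", signed=True)
b = int.from_bytes(bytes([5, 6, 7, 8]), "little", signed=True)
print(dp4a(a, b, 10))  # 80
```

The point is throughput: packing four INT8 multiply-accumulates per instruction is roughly what a quantised network needs to avoid falling off a cliff on hardware without dedicated matrix units.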
1. While it's true that we have no idea what the Switch 2's Tegra will look like, a low-TDP mobile 3050 level of performance (like the one in the video) is in line with the upper-end rumors, especially if they switched from Ampere to Lovelace as some of the more recent rumors allude to. Furthermore, it doesn't invalidate the point I was making: DLSS is a significant improvement even for the lowest-end Ampere chips. The 25W mobile 2050 (which is technically GA107, despite the 20-series name) also benefits significantly from DLSS despite being heavily cut down relative to the mobile 3050. It's often the difference between a game being unplayable and being lockable to 30fps or even 60fps (see the quick pixel-count sketch at the end of this post).
2. Right, and the point of whether or not it can technically support XeSS is moot if no games use it. Nvidia has an incentive (with Nintendo) to push DLSS hard on the Switch 2 in a way Intel doesn't have with XeSS on the current-gen consoles.
3. We still don't know how well FSR 3 will compare to DLSS. Like DLSS 3.0, it seems to be mostly an optical-flow frame-generation release (bleh), though I'm sure they are improving their TAAU solution too. The concern I have with AMD's ability to keep up is that as these deep-learning models get better (especially once the model-building process itself is automated by AI), it's going to be very hard for the hand-tuned heuristic methods they've relied on in the past to keep pace. AMD will either have to go the machine-learning route themselves or find some new innovation beyond TAAU.
It's unfortunate because it means less competition in the GPU space, but the work Nvidia is putting into deep-learning inference for 3D modeling (and gaming) is both wide and deep. Every other month they release a new paper about some inference method, and these are all potential future DLSS features.
4. Between 12W and 30W they are doing well indeed. Sub-12W, ARM is still king.
5. Yeah, true, just having a capable VRR display is better.
6. The dev kits are ostensibly coming with 16GB. Having 8GB dedicated to graphics fits the performance spec of the device; anything more probably wouldn't make sense anyway.
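On the memory-split point from the quote, the Series X is a good illustration of how an asymmetric layout behaves: every chip gets its own 32-bit channel, but only some chips carry the extra capacity, so the upper part of the address space is striped across fewer channels and sees proportionally less bandwidth. Rough arithmetic below; the Series X figures are public, while the 12GB Switch 2 line is pure speculation on my part:

```python
# Effective bandwidth of an asymmetric / "clamshell-style" memory layout:
# the address range interleaved across every channel gets full bandwidth,
# the remainder only gets the channels that still have capacity left.
def split_bandwidth(channels_total: int, channels_with_extra: int,
                    per_channel_gb_s: float) -> tuple[float, float]:
    fast_pool = channels_total * per_channel_gb_s
    slow_pool = channels_with_extra * per_channel_gb_s
    return fast_pool, slow_pool

# Series X: ten 32-bit GDDR6 channels at 56 GB/s each. Six of the ten chips
# are 2GB instead of 1GB, so 10GB is striped across all ten channels and the
# remaining 6GB across only six.
print(split_bandwidth(10, 6, 56))    # (560.0, 336.0) GB/s, matching the spec sheet

# Hypothetical Switch 2-style 12GB on a 128-bit LPDDR5 bus (four 32-bit
# channels at ~25.6 GB/s each for LPDDR5-6400): e.g. two 4GB + two 2GB
# packages would give 8GB at full speed and 4GB at half speed. Numbers are
# made up, just to show the shape of the trade-off.
print(split_bandwidth(4, 2, 25.6))   # (102.4, 51.2) GB/s
```

The takeaway is just that the "slow" 4GB wouldn't be slower chips, it's the same chips with fewer channels covering that part of the address range.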
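And to put a rough number on the DLSS point in (1): the GPU only shades the internal resolution, and pixel count falls with the square of the per-axis scale factor, which is where most of the headroom comes from. Quick sketch using the commonly cited DLSS mode factors and a hypothetical 1080p output target (shading cost isn't perfectly linear in pixels, so treat it as an upper bound on the saving):

```python
# How much of a native frame actually gets shaded at each DLSS mode.
# Scale factors are the commonly cited per-axis values for the DLSS 2.x modes.
MODES = {"Quality": 0.667, "Balanced": 0.58,
         "Performance": 0.5, "Ultra Performance": 0.333}

def internal_pixels(out_w: int, out_h: int, scale: float) -> int:
    return round(out_w * scale) * round(out_h * scale)

out_w, out_h = 1920, 1080            # hypothetical 1080p output target
native = out_w * out_h
for mode, s in MODES.items():
    px = internal_pixels(out_w, out_h, s)
    print(f"{mode:17s} {round(out_w*s)}x{round(out_h*s)} "
          f"-> {px / native:5.1%} of native pixels shaded")
```

Shading roughly 25-45% of the pixels and reconstructing the rest is the kind of saving that turns "unplayable" into "lockable 30" on a 2050-mobile-class part.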