| EpicRandy said: A great comparison, I think, is the Series S versus the 6800U. The 6800U's GPU (the 680M) has a peak performance of 3.69 TFLOPS, which is very close to the Series S's 4 TFLOPS. However, the 680M is limited to 12 CUs, which forces it to run at higher clocks with added inefficiency. The 680M is also RDNA 2, and the video suggests such a device from MS would use RDNA 4, with better efficiency. The 6800U is also used in devices similar to the suggested Series S handheld, like the AYANEO 2.
Teraflops is a garbage metric. Don't use it.
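To see why: the headline teraflops figure is just paper math, CUs × shaders per CU × 2 FLOPs per clock (an FMA) × clock speed. Here's a quick sketch of that calculation, assuming the 64 shaders per CU that GCN/RDNA-style designs use, and the advertised boost clocks:

```python
def peak_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 throughput: CUs x shaders x 2 FLOPs per clock (FMA) x clock."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(peak_tflops(20, 1.565))  # Series S GPU: 20 CUs @ 1.565 GHz -> ~4.01
print(peak_tflops(12, 2.4))    # Radeon 680M:  12 CUs @ 2.4 GHz   -> ~3.69
```

Note what's missing from that formula: architecture, memory bandwidth, sustained clocks under a real TDP. Two GPUs with identical teraflops can perform very differently, which is the whole point below.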
Fewer CUs and higher clocks isn't inefficient; it can actually result in better performance per watt.
Let's take the Vega integrated graphics on AMD's APUs as an example...
My old laptop with a Ryzen 2700U @ 25 W TDP versus my other old laptop with a 4500U @ 25 W TDP.
They are both based on Vega graphics.
The 4500U
* Vega 6 CUs @ 1,500 MHz - 1.15 teraflops.
The 2700U
* Vega 10 CUs @ 1,300 MHz - 1.66 teraflops.
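Those two figures come straight out of the same peak-FP32 formula, again assuming 64 shaders per Vega CU:

```python
def peak_tflops(cus, clock_ghz, shaders_per_cu=64):
    # Peak FP32: CUs x shaders x 2 FLOPs per clock x clock, in TFLOPS.
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0

print(peak_tflops(6, 1.5))   # 4500U, Vega 6  -> ~1.15 TFLOPS
print(peak_tflops(10, 1.3))  # 2700U, Vega 10 -> ~1.66 TFLOPS
```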
On paper, the 2700U should own gaming performance: same graphics architecture, more CUs at a lower clock, same TDP.
Yet, in real-world gaming, the 4500U will always win. Why? It's a balancing act: CUs consume power, clockspeeds consume power, and every processing architecture has an inherent efficiency curve, with a sweet spot where you get the most performance per watt.
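To make that balancing act concrete, here's a toy model with completely made-up numbers, purely to illustrate the shape of the curve: give every active CU a fixed static-power cost plus a dynamic cost that scales roughly with frequency × voltage², where voltage has to rise as frequency does. At a tight power budget, the wider GPU gets dragged so far down the curve that the narrow one wins:

```python
STATIC_W = 0.6  # assumed fixed cost per powered-on CU, in watts

def voltage(f_ghz):
    # Assumed V/f relationship: voltage climbs as you chase clocks.
    return 0.7 + 0.35 * f_ghz

def cu_power(f_ghz):
    # Per-CU power: static leakage + dynamic (~ f * V^2).
    return STATIC_W + 1.2 * f_ghz * voltage(f_ghz) ** 2

def max_clock(cus, budget_w):
    """Highest clock (GHz) a given CU count can sustain inside the budget."""
    f = 0.0
    while cus * cu_power(f + 0.01) <= budget_w:
        f += 0.01
    return f

for cus in (6, 10):
    f = max_clock(cus, budget_w=10.0)
    print(f"{cus:>2} CUs -> {f:.2f} GHz, relative throughput {cus * f:.1f}")
    # 6 CUs -> 0.87 GHz, throughput 5.2; 10 CUs -> 0.45 GHz, throughput 4.5
```

Bump budget_w up to 50.0 in that same sketch and the 10-CU configuration pulls back ahead. That's the efficiency curve: neither "wide and slow" nor "narrow and fast" is universally right, it depends on where the power budget lands.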
AMD, through several generations of trial and error, determined that higher clockspeeds can provide more performance even with fewer CUs... provided other bottlenecks, like bandwidth limitations, are removed as well.
I would even pick something like the 5500U over the 4700U: same bandwidth, same CU count, same TDP, but the 5500U has far better GPU performance thanks to just the higher clocks.

www.youtube.com/@Pemalite
