Pemalite said:
Bofferbrauer2 said:
Add to this a TDP of 30W, and you can see why Xavier won't make it into a future Switch console.
|
Shrink it to 10/7nm and a lot of that TDP would be greatly reduced. - At the moment it is being fabricated on a "12nm" process. (Albeit more like a refined 14/16nm process, which in turn is based on 20nm planar... But I digress.)
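As a rough illustration of why a shrink helps, dynamic power scales roughly with C·V²·f; a minimal sketch, where the capacitance and voltage numbers are purely illustrative assumptions, not Xavier's actual electrical characteristics:

```python
# Back-of-envelope dynamic power scaling: P_dyn ~ alpha * C * V^2 * f.
# The figures below are illustrative assumptions, not NVIDIA's published
# electrical data for Xavier.

def dynamic_power(c_eff, voltage, freq_ghz):
    """Relative dynamic power: effective capacitance * V^2 * frequency."""
    return c_eff * voltage**2 * freq_ghz

# Hypothetical "12nm" baseline vs. a 7nm shrink at the same clock:
p_12nm = dynamic_power(c_eff=1.00, voltage=0.90, freq_ghz=1.4)
p_7nm = dynamic_power(c_eff=0.60, voltage=0.75, freq_ghz=1.4)  # lower C and V

print(f"Relative power after shrink: {p_7nm / p_12nm:.0%}")  # roughly 42%
```

Leakage and the rest of the SoC complicate the picture, but it shows why a node or two can take a real bite out of that 30W.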
Bofferbrauer2 said:
While it still uses LPDDR4, Xavier has a 256-bit connection instead of the usual 64-bit (dual channel), meaning it has 4 times the bandwidth you would normally expect from LPDDR4. It reaches 137GB/s, on par with low-power GPUs (for instance, a Radeon RX 560 with 16 CUs only has 112GB/s).
|
It could have a 512-bit LPDDR4 connection with 274GB/s of bandwidth. It is still not enough for 8K.
I am probably the last person on these forums who needs bandwidth, bus widths, clock rates and so on explained to them. :P
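For anyone who does want the arithmetic spelled out, peak bandwidth is just bus width × transfer rate; a quick sketch, assuming LPDDR4X at 4266 MT/s (the rate usually quoted for Xavier):

```python
# Peak DRAM bandwidth = (bus width in bytes) * (transfers per second).
# Assumes 4266 MT/s LPDDR4X, the rate usually quoted for Xavier.

def peak_bandwidth_gbs(bus_width_bits, rate_mts):
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * rate_mts / 1000

print(peak_bandwidth_gbs(64, 4266))   # ~34 GB/s  - a typical 64-bit phone setup
print(peak_bandwidth_gbs(256, 4266))  # ~137 GB/s - Xavier's 256-bit interface
print(peak_bandwidth_gbs(512, 4266))  # ~273 GB/s - the hypothetical 512-bit case
```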
Bofferbrauer2 said:
The GPU is rated at 1.3 TFLOPS, roughly the same as the OG Xbox One (1.31 TFLOPS; the S is clocked higher and thus a bit faster at 1.4 TFLOPS).
|
Yeah. Using FLOPS numbers is meaningless. It's common knowledge that NVIDIA's GPUs, be it Maxwell, Pascal or Volta, are simply more efficient than the archaic Graphics Core Next architecture in the Xbox One. Nor are the bandwidth numbers directly comparable. (I.e. delta colour compression.)
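For context, the theoretical FP32 figures being thrown around fall straight out of shader count × 2 (one FMA per clock) × clock speed; a quick sketch using the commonly cited specs (treat the Xavier clock as an assumption, since it varies by power mode):

```python
# Theoretical FP32 throughput = shader ALUs * 2 (fused multiply-add) * clock.
# Shader counts and clocks below are the commonly cited figures; Xavier's
# GPU clock in particular depends on the selected power mode.

def fp32_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(fp32_tflops(512, 1377))  # ~1.41 TFLOPS - Xavier (512 Volta CUDA cores)
print(fp32_tflops(768, 853))   # ~1.31 TFLOPS - OG Xbox One (768 GCN shaders)
print(fp32_tflops(768, 914))   # ~1.40 TFLOPS - Xbox One S (same shaders, higher clock)
```

Which is exactly why the comparison is rough: the formula says nothing about how much of that theoretical peak each architecture actually delivers.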
|
A shrink would save power, no doubt about that. But 30W is about 10 times what the Switch is consuming, so one shrink alone wouldn't cut it. It would need at least a 5nm process to get consumption low enough not to drain the battery too fast. Add to this that Nintendo is very conservative in that regard (they want proven hardware and nodes, hence why their hardware tends to be older at release than PlayStation's or Microsoft's hardware).
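To put that 30W into perspective against a handheld battery, here's a back-of-envelope runtime estimate; the ~16Wh figure matches the launch Switch's published 4310mAh/3.7V pack, while the draw numbers are rough assumptions on my part:

```python
# Rough handheld runtime = battery energy (Wh) / system draw (W).
# 16 Wh is roughly the launch Switch's 4310 mAh * 3.7 V battery; the
# draw figures are rough assumptions for the sake of comparison.

BATTERY_WH = 16.0

for label, draw_w in [("Switch handheld, demanding game (~7 W)", 7.0),
                      ("Xavier at its 30 W TDP", 30.0)]:
    print(f"{label}: {BATTERY_WH / draw_w:.1f} h")

# Switch handheld, demanding game (~7 W): 2.3 h
# Xavier at its 30 W TDP: 0.5 h
```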
Where did I ever say it could play 8K games? It'll be happy just to run major games in FHD even after the upgrade. I just wanted to point out that, while it's still LPDDR4, the bandwidth is closer to that of entry-level GPUs than to what we normally get with CPUs, and hence it can support a bigger GPU part without getting bottlenecked as early as "LPDDR4" may have implied to other readers here.
I know directly comparing FLOPS is meaningless, but it can give a rough indication of how powerful the GPU part is.