curl-6 said: Trim those quote trees, guys. ;) And for those of you who weren't into it or around then, we had spec wars like this at the start of the 6th and 7th gens too, and people claimed the PS2 was stronger than the Xbox and Gamecube or the 360 was stronger than the PS3 based on specs like these Wii U ones.
This is completely different. PC gamers now have an entire history of AMD GPUs to compare against, from the HD 7000 series on back. When the Xbox 360's GPU came out, PCs didn't even have a unified shader architecture yet; that only arrived in 2006, which made it impossible at the 360's launch to estimate how its GPU compared to PC parts. Once the HD 2000 series released, we knew the Xbox 360's GPU was more powerful than the PS3's RSX. That isn't even up for debate among people who have researched the specs: leaving the CPUs aside, you can say without a shadow of a doubt that the 360's GPU is the stronger part.
We know the die size of the Wii U's GPU, the console's total maximum power consumption, and the die sizes of all the important R700 chips made on the 40nm node. There is no magic sauce in the R700 design; it uses the older VLIW architecture. And if the architecture isn't Graphics Core Next, the Wii U's GPU is automatically outdated no matter what it is, because we know it's a 40nm part made by AMD and there are no 40nm GCN GPUs. If the Wii U draws 35-40W in games, the console cannot be powerful. It's just not possible. So yes, you absolutely can estimate the performance of hardware from the GPU's manufacturing node and the total power consumption, especially since we can rule out the most advanced architectures (Kepler or Graphics Core Next), which means the GPU is automatically less efficient per clock cycle.
Look at the size of flagship GPUs on the PC; they are nearly the size of the Wii U itself. AMD's flagship HD7970 GHz Edition draws 235W on the 28nm node, which at the same clock draws roughly 40% of the power an equivalent 40nm chip would (about a 60% power reduction). Scale that back and a 7970GE-class GPU built on the Wii U's 40nm node at 1.05GHz would draw about 587W. Set that against the Wii U's ~35W draw in games, and by this estimate alone the GPU in the Wii U is roughly 16.8x slower than an HD7970GE (587W / 35W ≈ 16.8x).
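To make that arithmetic explicit, here's a minimal Python sketch of the estimate. The 40% node power ratio, the 235W and 35W figures, and the linear performance-per-watt scaling are all assumptions of the argument, not measurements, so treat the output as a rough order-of-magnitude figure:

```python
# Back-of-the-envelope sketch of the node-scaling estimate above.
# All inputs are the post's assumptions, not measurements: a 28nm part
# drawing ~40% of the power of an equivalent 40nm part at the same clock,
# a 235W HD7970 GHz Edition, and a 35W whole-console draw for the Wii U.

HD7970GE_POWER_28NM_W = 235.0  # AMD's 28nm flagship TDP
NODE_POWER_RATIO = 0.40        # assumed 28nm power / 40nm power, same clock
WIIU_GAME_POWER_W = 35.0       # assumed total Wii U draw in games

# Power a 7970GE-class chip would draw if built on the Wii U's 40nm node.
power_40nm = HD7970GE_POWER_28NM_W / NODE_POWER_RATIO  # ≈ 587.5 W

# Crude performance gap, assuming performance scales linearly with the
# power budget on a fixed node and architecture (a big simplification).
gap = power_40nm / WIIU_GAME_POWER_W

print(f"Hypothetical 40nm HD7970GE power: {power_40nm:.0f} W")  # ~588 W
print(f"Estimated performance gap: ~{gap:.1f}x")                # ~16.8x
```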

No matter how you look at it, whether through power consumption, die size, or the maximum specs of R700 parts at that die size, you arrive at the same conclusion: the Wii U's GPU is far too underpowered for a next-generation console.