HoloDust said:
RazorDragon said:

First, the Wii U could handle 480 SPs based on the die size alone, counting the GPU along with the eDRAM. Now it supposedly can't handle 320 SPs in the same space that was thought to pack 480 SPs, and could only handle 160 SPs? That's not possible unless that GPU was made on the 90nm process, you know, the same one used in 2006 to make the Wii. So, no, no way it's fewer than 320 SPs.

I think 480 SPs was always a bit of a stretch (but hoped for), after the first measurements...

480:24:8 @40nm GPU is 118mm^2

400:20:8 @ 40nm GPU is 104mm^2
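A rough sanity check on those two die sizes, assuming area scales linearly with SP count between the quoted 40nm parts (this ignores fixed-function blocks, so treat it as a ballpark only):

```python
# Ballpark die-size interpolation from the two 40nm parts above.
# Assumes area scales linearly with shader (SP) count, which ignores
# fixed-function area -- a rough estimate, not a real die measurement.
def estimate_area_mm2(sps, ref=((480, 118.0), (400, 104.0))):
    (sp_a, area_a), (sp_b, area_b) = ref
    mm2_per_sp = (area_a - area_b) / (sp_a - sp_b)  # ~0.175 mm^2 per SP
    return area_b + (sps - sp_b) * mm2_per_sp

print(estimate_area_mm2(320))  # ~90 mm^2 for a hypothetical 320:16:8 part
```

By that crude scaling, a 320 SP part would land around 90mm^2, comfortably within the measured GPU area even with eDRAM on the die.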

I'm still thinking it's a 320SP part that is hindered by the memory architecture. For comparison, the 5550 (a 320:16:8 part), same clock (550MHz), different memory bandwidth:

5550 GDDR5/128bit (51.2GB/s) - 27VP

5550 DDR3/128bit (25.6GB/s) - 21.8VP

5550 DDR2/128bit (12.8GB/s, same as the Wii U's suggested bandwidth) - 16.5VP

X360 - something around 13-14VP

Rumoured PS4 GPU - around 140VP
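The bandwidth figures in that list follow from bus width times effective transfer rate; a quick check, assuming the usual effective rates for those 5550 memory configs (GDDR5 at 3.2 GT/s, DDR3 at 1.6 GT/s, DDR2 at 0.8 GT/s, all on a 128-bit bus):

```python
# Memory bandwidth = bus width (in bytes) x effective transfer rate (GT/s).
# The effective rates used below are assumptions based on typical
# Radeon 5550 memory configurations, for illustration only.
def bandwidth_gb_s(bus_bits, gt_per_s):
    return (bus_bits / 8) * gt_per_s

print(bandwidth_gb_s(128, 3.2))  # GDDR5 -> 51.2 GB/s
print(bandwidth_gb_s(128, 1.6))  # DDR3  -> 25.6 GB/s
print(bandwidth_gb_s(128, 0.8))  # DDR2  -> 12.8 GB/s
```

So halving the transfer rate halves the bandwidth, which lines up with the VP scores dropping as the same GPU core gets starved for data.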

If Nintendo designed in such low-level bottlenecks in CPU and RAM performance, would it make any sense to chuck in a GPU that will be constantly throttled? A 320 shader part would be more GPU than the platform can support, it seems, which means a 320 shader part would be wasted silicon and wasted money, and that's really not in keeping with Nintendo's philosophy! The logic behind proportionally massive GPU power isn't there, because things like GPGPU need data to work on and high throughput. Ergo, if a 320 shader part can be proven far more capable than the 360, as function does, that is pretty conclusive for me. Nintendo wouldn't put in a part that capable and then completely gimp it unless their engineers are incompetent.

from beyond3d mod