Some people have been pointing to the reference to R600 in the stuff from Marcan to conclude that the Wii U GPU is R600-based. There are a few problems with that theory.

For one, R600 was manufactured at 55 nm at best. For another, the 3850 was a 190 mm² die, way too big. More importantly, the command he gave was a Linux command. Linux runs GPUs using drivers, and those drivers don't always have names matching those of the chip they're driving. For instance, the R600 graphics driver for X.org covers not just R600/R700 but also Evergreen and Northern Islands chips, and Evergreen and Northern Islands are 40 nm parts. Given the way those drivers work, it seems likely that register names and the like also match across those families.

The earliest chip to be fabricated at 40 nm was an R700 part, specifically the 4770. Of course, since it had 640 SPs, ran at 750 MHz, and drew 80 W of power, we can rule that chip out, too. No other R700 chips were 40 nm.

That brings us to Evergreen, which happens to have a suitable chip: the 5570, AKA Redwood LE. It has the same clock speed, an SP count matching the one currently speculated for the Wii U GPU, and a power draw maxing out at 39 W, a number that could probably be brought down further by more modern alterations.

One of the oddities is that many are saying the Wii U GPU's memory bandwidth is 12.8 GB/s. This is strange, as that number is listed for the 5570 only when using DDR2 memory, whereas the Wii U GPU is said to use GDDR3 memory, which is listed with twice the memory bandwidth.
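To see where the two numbers come from, here's a quick back-of-envelope check. It assumes the 5570's listed 128-bit memory bus, with DDR2 at an effective 800 MT/s versus GDDR3 at an effective 1600 MT/s (the exact transfer rates are my assumption, chosen to match the published figures):

```python
def bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    """Peak memory bandwidth: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mt_s * 1e6 / 1e9

# 128-bit bus, DDR2 at an effective 800 MT/s -- the oft-quoted figure
ddr2 = bandwidth_gb_s(128, 800)    # 12.8 GB/s
# Same bus with GDDR3 at twice the effective rate -- twice the bandwidth
gddr3 = bandwidth_gb_s(128, 1600)  # 25.6 GB/s
```

So the 12.8 GB/s figure only falls out if you assume the slower memory configuration.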

Of course, this isn't a 5570, either. As has been said so many times, this GPU is not a standard GPU by any measure. This leads me to suspect that it'll be found to be closer to a more powerful GPU, but with a lowered clock and fewer SPs. Something like, say, the 6570M, which has 400 SPs, a clock speed of 650 MHz, and a power draw of 30 W. With a reduction from 400 SPs to 320 SPs and a reduction of clock speed from 650 MHz to 550 MHz, you would expect a much lower power draw, while still managing 352 GFLOPS.
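The 352 GFLOPS figure follows from the usual rule of thumb for these VLIW parts: each SP can retire one fused multiply-add (2 FLOPs) per clock. A quick sketch of that arithmetic:

```python
def gflops(sp_count, clock_mhz, flops_per_sp_per_clock=2):
    """Peak single-precision throughput, counting one FMA (2 FLOPs) per SP per clock."""
    return sp_count * clock_mhz * 1e6 * flops_per_sp_per_clock / 1e9

full_6570m = gflops(400, 650)  # the 6570M as shipped: 520 GFLOPS
cut_down   = gflops(320, 550)  # the speculated configuration: 352 GFLOPS
```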

But this is speculation on my part, based on mathematics rather than specific knowledge of how GPUs work and what influences power. All I know is that underclocking often decreases power usage more than linearly, and that you'd expect power usage to drop along with the number of transistors.
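The "more than linearly" intuition comes from the standard CMOS dynamic power relation, P ∝ C·V²·f: lowering the clock often lets you lower the voltage too, so power falls faster than the clock does. A purely illustrative sketch, using the 30 W and 650 → 550 MHz numbers above and assuming (hypothetically) that voltage can scale down in proportion to frequency:

```python
def scaled_dynamic_power(base_watts, freq_ratio, voltage_ratio):
    # CMOS dynamic power scales as C * V^2 * f, so relative power
    # scales with freq_ratio * voltage_ratio^2.
    return base_watts * freq_ratio * voltage_ratio**2

ratio = 550 / 650
same_voltage  = scaled_dynamic_power(30, ratio, 1.0)    # linear-only drop, ~25 W
lower_voltage = scaled_dynamic_power(30, ratio, ratio)  # cubic drop, ~18 W
```

This ignores static leakage and the transistor-count reduction entirely, so treat it as an illustration of the scaling law rather than an estimate of the real chip.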