This thread has made me start thinking that everyone may be looking too far toward each extreme to accurately guess the GPU that the Wii U is using ...
I was just looking at the Radeon HD 6670 and Radeon HD 6570, and it seems like either could be a good starting point for a low-power video game console. As PC graphics cards they seem to run HD console games (ones that ran at 720p@30fps on the HD consoles) at higher detail settings at 1280x1024, above 60fps; and that probably translates to 720p@60fps when you're also rendering to the Wii U tablet. At idle these cards sit in the 10W range, and they peak at around 60W (a large portion of which is due to the GDDR5 memory).
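As a rough back-of-envelope check (my own numbers, assuming the tablet screen is 854x480): 1280x1024 is about 1.31M pixels per frame, while 1280x720 plus 854x480 is about 0.92M + 0.41M = 1.33M pixels, essentially the same per-frame workload. So a card that clears 60fps at 1280x1024 mapping to 720p@60fps while also driving the tablet doesn't seem unreasonable.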
Through customization, Nintendo could probably maintain (or modestly improve) performance while reducing power consumption. With optimization, developers may be able to improve real-world performance by 50% to 100% over what was seen on the PC; and developers who wanted to push better visuals could drop the frame rate to 30fps and (roughly) double the detail. At 720p@30fps the best-looking games would probably have 2 to 3 times the detail of their HD console counterparts, and those same games running at 1080p@60fps would likely push even the most powerful of the next-generation consoles pretty hard.
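For scale (again, just my rough math): 720p@30fps is 1280x720x30 ≈ 28M pixels/s, while 1080p@60fps is 1920x1080x60 ≈ 124M pixels/s, roughly 4.5x the raw pixel throughput before the extra detail is even factored in, which is why I'd expect those same games at 1080p@60fps to tax even the stronger next-generation machines.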
Ultimately, what got me thinking this is the question: "why would Nintendo seek substantial customization of a high-powered GPU to make it energy efficient, or of an embedded GPU to make it powerful enough, when they could use a GPU that is already 90% of what they (probably) want?"