Some interesting excerpts:

"Code optimised for the PowerPC processors found in the Xbox 360 and PlayStation 3 wasn't always a good fit for the Wii U CPU, so while the chip has some interesting features that let the CPU punch above its weight, we couldn't fully take advantage of them. However, some code could see substantial improvements that did mitigate the lower clocks - anything up to a 4x boost owing to the removal of Load-Hit-Stores, and higher IPC (instructions per cycle) via the inclusion of out-of-order execution."

"The GPU proved very capable and we ended up adding additional "polish" features as the GPU had capacity to do it. There was even some discussion on trying to utilise the GPU via compute shaders (GPGPU) to offload work from the CPU - exactly the approach I expect to see gain traction on the next-gen consoles - but with very limited development time and no examples or guidance from Nintendo, we didn't feel that we could risk attempting this work...The GPU is better than on PS3 or Xbox 360"

"I've also seen some concerns about the utilisation of DDR3 RAM on Wii U, and a bandwidth deficit compared to the PS3 and Xbox 360. This wasn't really a problem for us. The GPU could fetch data rapidly with minimal stalls (via the EDRAM) and we could efficiently pre-fetch, allowing the GPU to run at top speed."

Sounds like exactly what I and others have been saying for over a year now: an eccentric design that isn't fully utilised by quick and nasty PS3/360 ports because of its different architecture.

We haven't seen what the Wii U can really do, not by a long shot.