
Readers here and on other websites should not take this information as fact for the Wii U, since there is not much public information about the hardware inside it or its performance. We can only speculate, but I am going to share my findings and my research on the Wii U:

Wii U CPU:
- Tri-core IBM 45nm PowerPC 750CL/G3/Power7 hybrid
- L2 cache: Core 0 has 512KB, Core 1 has 2MB, Core 2 has 512KB
- Clocked at 1.24GHz
- 4-stage pipeline; not a Xenon/Cell-style CPU-GPU hybrid
- Produced at IBM's advanced CMOS fabrication facility
- The L2 cache is eDRAM (a Power7-style memory implementation)

*The Wii's CPU core was about 20% slower than an Xbox 360 core; since the Wii U CPU is a modified/enhanced version clocked 65-70 percent higher, two Wii U cores should be roughly on par with three Xbox 360 cores, and if all three cores are used then the Wii U CPU should be about one third faster than the Xbox 360 processor and faster than the PlayStation 3 processor. http://gbatemp.net/threads/retroarch-a-new-multi-system-emulator.333126/page-7#post-4365165
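
A rough sketch of the per-core scaling arithmetic behind that claim (the 20% and 65-70% figures are the assumptions above, not measured values):

# Rough per-core scaling estimate behind the claim above.
# Assumptions (from the post, not measurements): a Wii core does ~80% of the
# work of an Xbox 360 core, and a Wii U core is the same design clocked ~70% higher.
wii_core_vs_x360_core = 0.80          # Wii core ~20% slower than an Xbox 360 core
clock_uplift = 1.70                   # Wii U core clocked ~70% higher than a Wii core

wiiu_core_vs_x360_core = wii_core_vs_x360_core * clock_uplift   # ~1.36

print(f"one Wii U core   ~= {wiiu_core_vs_x360_core:.2f} Xbox 360 cores")
print(f"two Wii U cores  ~= {2 * wiiu_core_vs_x360_core:.2f} Xbox 360 cores")        # ~2.7, close to 3
print(f"three Wii U cores vs three Xbox 360 cores: {wiiu_core_vs_x360_core:.2f}x")    # ~1.36x, i.e. 'one third faster'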

- Dual-core ARM Cortex-A8 for background OS tasks, clocked at 1GHz with 64KB of L1 cache per core and 1MB of SRAM as L2 cache; an evolution of the Wii's "Starlet" chip
- Single ARM9 "Starlet" core for backward compatibility, rumored to run at higher clocks

Wii U Memory:
- 2GB of DDR3: 1GB for the OS, 1GB for games
- eDRAM: 32MB of VRAM + 4MB for the GamePad + 3MB of CPU L2 cache
- Clocked at 550MHz
- The eDRAM acts as a "unified pool of memory" for the CPU and GPU, practically eliminating latency between them

Wii U GPU:
- VLIW5/VLIW4 Radeon HD 5000/6000 series, 40nm
- DirectX 11 / OpenGL 4.3 / OpenCL 1.2 / Shader Model 5.0 feature level
- Supports GPGPU compute/offload
- Customized, using a custom Nintendo API codenamed GX2 (GX was the GameCube API)
- Clocked at 550MHz
- Produced on TSMC's advanced 40nm CMOS process
- Uses 36MB of eDRAM as VRAM
- 4MB of that eDRAM is allocated to the streaming feed for the GamePad
- The GPU is customized, and judging by the modifications Nintendo made to the GPUs in its previous consoles, we can presume it won't waste a single mm^2 on unneeded features and has tailored the chip to its own needs.

*Eyefinity has been present on AMD cards since the Radeon HD 5000 series, so the GPU is at least a Radeon HD 5000 with TeraScale 2.

*If it were a Radeon HD 4000 series part with 320 SPUs it would be the 55nm Radeon HD 4670, but since the Wii U uses Eyefinity and its GPU is 40nm, the Radeon HD 4000 theory is invalid.

*The Radeon HD 6000 series was released in Q3 2010; the final Wii U silicon was finished in Q2 2012 and the Wii U was released in Q4 2012. Looking at the gap between E3 2011 and the final silicon in Q2 2012, and given that the Radeon HD 6000 evolved from the Radeon HD 5000, which in turn evolved from the Radeon HD 4000, I presume switching to a newer but similar GPU and architecture was not a problem, especially since all of these GPUs were produced at TSMC's fabs.

*The Wii U shown at E3 2011 and its development kits had a Radeon HD 4850; it is rumored that a newer Wii U development kit replaced the 4850 with a modified/customized/cut-down Radeon HD 6850.

*The Radeon HD 6850 delivers about 1.5 TFLOPS at a clock of 775MHz, with 1GB of GDDR5 VRAM and around 130GB/s of bandwidth.


*The GPU in the Wii U is clocked about 30% lower, at 550MHz, and if it has one third of the SPUs it would deliver about 0.352 TFLOPS, backed by 36MB of eDRAM with 70-130GB/s of bandwidth. Two-player co-op, for example in Black Ops 2's Zombies mode, appears to use Eyefinity(?) to stream two different in-game images/views.
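
A quick sketch of the shader-throughput arithmetic behind that 0.352 TFLOPS figure (the 320-SPU count and 550MHz clock are the assumptions above):

# Peak single-precision throughput for a VLIW-style Radeon:
# FLOPS = SPUs * 2 ops per SPU per clock (multiply-add) * clock
def peak_gflops(spus, clock_mhz):
    return spus * 2 * clock_mhz / 1000.0

print(peak_gflops(960, 775))   # full Radeon HD 6850: ~1488 GFLOPS (~1.5 TFLOPS)
print(peak_gflops(320, 550))   # one third of the SPUs at 550MHz: ~352 GFLOPS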

*A 90nm 16MB eDRAM macro can do about 130GB/s of bandwidth, so the 45nm 32MB eDRAM in the Wii U should manage at least the same. And since the CPU's cache and the GPU both use eDRAM, latency is much, much lower, and AI work can be offloaded to the GPU.

Wii U Notes:
- Can't use DirectX 11 because of the OS; only OpenGL 4.3 and Nintendo's GX2 API, which is ahead of DX11
- Easier to program for, without the multiple bottlenecks that cause issues on the Xbox 360 and PlayStation 3
- Efficient hardware with no bandwidth bottlenecks
- Launch games and ports use two cores, and only a couple of ports/games use the third core to any decent degree
- The Wii U CPU does far more operations per cycle/MHz than the Xbox 360/PlayStation 3 CPUs; whether it is faster overall is unknown
- The Wii's CPU core was slower
- A minor Power7-style memory feature allows shaving 10 or more percent off texture size without loss of quality (some kind of special compression)
- Wii U power consumption at full system load is 40 watts (the highest load possible in its current state)
- The Wii U's PSU/power brick is rated at 75 watts with 90% efficiency, so it can deliver 67.5 watts (see the sketch after this list)
- The Wii U's flash storage, RAM, USB ports, motherboard, fan and small secondary chips consume around 5 to 10 watts in total when fully stressed
- The Wii U's SoC (CPU and GPU) has an estimated maximum power consumption of 30 to 35 watts
- Supports 4K displays, native 4K via HDMI 1.4b (possible 2D 4K games?)
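
A short sketch of the power budget implied by those last few notes (the wattage figures are the estimates from the list above, not measurements):

# Wii U power budget, using the estimates from the notes above.
psu_rating_w   = 75
psu_efficiency = 0.90
deliverable_w  = psu_rating_w * psu_efficiency      # ~67.5 W the brick can supply

measured_full_load_w = 40                           # observed full-system draw
other_components_w   = (5, 10)                      # storage, RAM, USB, fan, etc.

# Whatever is not eaten by the small components is left for the CPU+GPU SoC.
soc_estimate_w = tuple(measured_full_load_w - x for x in reversed(other_components_w))

print(f"PSU can deliver ~{deliverable_w:.1f} W")
print(f"headroom over full load: ~{deliverable_w - measured_full_load_w:.1f} W")
print(f"SoC estimate: {soc_estimate_w[0]}-{soc_estimate_w[1]} W")   # 30-35 W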

*It may in fact be the most efficient chip/SoC in the world in terms of performance per watt among 45/40nm parts.

In case you are wondering why some games run worse on the Wii U than on the 7th-generation consoles, I have a simple explanation: the Wii U's hardware is noticeably different from the Xbox 360's or PlayStation 3's, because their processors are not pure CPUs and can handle GPU-related tasks well, whereas the Wii U's is primarily a CPU. Another reason is that developers don't spend resources and time on porting their games to the Wii U, so the ports lack proper optimization and adaptation of their engines to the Wii U's hardware. Even so, although some ports perform worse than on the 7th-generation consoles, in most cases they run at a higher resolution and/or at native 720p/HD.

Most if not all 3rd-party launch games were made on older Wii U development kits that had 20 to 40% lower clocks than the final development kit Nintendo released near launch, so developers did not have much time to adapt their games to the final devkit and the games ran poorly. Games like Darksiders 2, Warriors Orochi 3 Hyper and Batman: Arkham City used only two cores, while the third was not used at all or was barely used to help with CPU-related tasks. Since most ports come from the Xbox 360 and/or PlayStation 3 versions of the games, there are bound to be incompatibilities: the Xbox 360 and PlayStation 3 processors handle CPU as well as GPU tasks, and the GPUs, RAM/memory, latency and other factors differ from the Wii U, so optimization and adaptation is needed, while cheap ports, as always, tend to be a train wreck. Don't you agree?

The Wii U may not be a "leap" like the Xbox One or PlayStation 4, but it is a small leap over the Xbox 360 and PlayStation 3 even when judged very roughly, and when you take into account all the implementations, features and "tricks", the gap is even bigger. The Wii U has more embedded RAM than the Xbox One, which has 32MB of eSRAM, while the Wii U has 36MB of superior eDRAM for the GPU; the eDRAM also blows away the GDDR5 in the PlayStation 4 in terms of bandwidth, if I am correct? 130GB/s on the Wii U versus 80GB/s on the PlayStation 4?

We need to take into consideration that the Wii U's CPU, Espresso, borrows a memory implementation from the Power7 architecture that allows it to use eDRAM. We know the CPU has a total of 3MB of eDRAM as L2 cache, and it could also use the main eDRAM pool as L3 cache and maintain a connection with the GPU, creating HSA/hUMA-like capabilities and greatly reducing the latency of CPU-GPU communication and data transfer.

The Wii U's GPU in the very first alpha development kit was a Radeon HD 4000 series part, specifically a 55nm Radeon HD 4850, not the 40nm RV740. By the time of the Wii U's first unveiling the Radeon HD 6000 series had been on the scene for nearly a year (by now almost three), so Nintendo could easily have switched to the Radeon HD 6000 series, since it is basically an evolution of the Radeon HD 5000, itself a refinement of the Radeon HD 4000. This is further supported by the use of Eyefinity features for the Wii U's GamePad, as shown by the Unity demo on Wii U and by Call of Duty: Black Ops 2 in co-op zombie mode; the Wii U can also stream up to two images to the GamePad, though that hasn't been used yet and maybe it will be in the near future.

People also seem to forget about the power consumption of dedicated GPUs versus ones embedded on the motherboard, which naturally consume less. The Wii U's GPU also uses eDRAM, which consumes 1 to 2 watts at most, compared to GDDR5, which consumes around 9 or more watts per 2GB. The GPU in the Wii U is embedded, so it does not need PCI-E, an additional PCB or extra chips, and the eDRAM is embedded in the die so its consumption is nearly negligible; power consumption can therefore be radically reduced.

Let's take the Wii U GPU die size and the Radeon HD 6970 die size as an example, and assume the Wii U GPU is a VLIW4-based Radeon HD 6000 series GPU rather than VLIW5. The Radeon HD 6970 consumes 250 watts maximum, has a die size of 389mm^2 and 2GB of GDDR5, and is clocked at 880MHz. Cut it down to 320 SPUs, which would use about 80mm^2, and consumption drops to roughly 83 watts; remove the 2GB of GDDR5 and it goes down to 70-73 watts. Now lower the clock from 880MHz to 550MHz, roughly 35% lower; if cutting the clock by 25% already halves power consumption (since a lower clock also allows a lower voltage), then at 35% lower clocks the GPU's consumption drops from 70-73 watts to roughly 14-15 watts even without being embedded, and we could easily shave off a couple more watts, so it would most likely go down to about 10 watts.
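
Here is the same back-of-the-envelope scaling written out (all inputs are the rough estimates from the paragraph above; the cubic clock/voltage scaling rule is my own simplifying assumption, not a measured relationship):

# Rough down-scaling of a Radeon HD 6970 towards a Wii U-sized part.
hd6970_clock = 880.0
target_clock = 550.0

cut_down_power_w = 83.0                        # estimate after cutting down to 320 SPUs (~80 mm^2)
no_gddr5_power_w = cut_down_power_w - 11.0     # dropping 2GB of GDDR5 -> ~70-73 W

# Assumption: power scales roughly with clock * voltage^2, and voltage scales
# with clock, i.e. power ~ (clock ratio)^3. This is a crude simplification.
clock_ratio = target_clock / hd6970_clock               # ~0.63
scaled_power_w = no_gddr5_power_w * clock_ratio ** 3    # ~17-18 W

print(f"clock ratio: {clock_ratio:.2f}")
print(f"estimated GPU power at 550MHz: ~{scaled_power_w:.0f} W")
# This lands in the same ballpark as the 14-15 W estimate in the text; embedding
# the GPU (no PCI-E, no extra PCB/chips) would shave a few more watts, which is
# how the ~10 W figure above comes about.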

Since the Wii U's maximum power consumption is 40 watts and, by my calculation, the GPU consumes a mere 10 watts or so, the Wii U's CPU could consume 15 to 20 watts and the rest of the system around 10 to 15 watts, depending on how much the CPU actually draws, which is an unknown factor. I am not counting any possible customizations on the Wii U's GPU; these are all rough estimates and we don't know the whole picture. The Wii U is a beast considering the process node it was built on, and probably the most efficient machine in the world on that node.

We cannot really compare the Wii U's GPU to any off-the-shelf dedicated GPU, since some features found in regular dedicated GPUs are not needed when the GPU is embedded. With some minor modifications I can see the Wii U having 384 SPUs, which at 550MHz would give about 420 GFLOPS, rather than the 320 SPUs it would have as a standard dedicated GPU of that die size. If Nintendo did drastic modifications they could even reach 500 SPUs and get very close to 600 GFLOPS. One of the homebrew hackers counted 600 shaders, so I am wondering if Nintendo is using a technology that AMD has never used, inherited from ATI which also never used it, called "high density", which is going to be used in AMD's Steamroller cores. From the information AMD has released, "high density" can increase the density of a chip by 30%, reduce its size by 30% and reduce power consumption.
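
For reference, the same throughput formula as before applied to the SPU counts discussed here (the counts themselves are the speculation from the paragraph above):

# Peak GFLOPS = SPUs * 2 ops per clock * clock in GHz, as before.
for spus in (320, 384, 500, 600):
    print(f"{spus} SPUs @ 550MHz -> {spus * 2 * 0.55:.0f} GFLOPS")
# 320 -> 352, 384 -> 422, 500 -> 550, 600 -> 660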

As for the Wii U's DDR3 RAM, its bandwidth has a theoretical maximum of 51.2GB/s, since it uses four 512MB chips rather than one large 2GB chip, so anyone claiming that its maximum bandwidth is a mere 12.8GB/s is either tech-illiterate or a sucker for the Beyond3D and NeoGAF forums, which tech-savvy people have fled or avoid.
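
The disagreement over the figure comes down to the combined bus width assumed for the four chips; a minimal sketch of the standard DDR3 bandwidth formula (the DDR3-1600 data rate is an assumption):

# DDR3 peak bandwidth = effective data rate (MT/s) * bus width (bytes).
def ddr3_bandwidth_gbs(mt_per_s, bus_width_bits):
    return mt_per_s * (bus_width_bits / 8) / 1000.0

print(ddr3_bandwidth_gbs(1600, 64))    # 12.8 GB/s  (four 16-bit chips)
print(ddr3_bandwidth_gbs(1600, 256))   # 51.2 GB/s  (four 64-bit chips, the figure claimed above)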

The Wii U's power brick/PSU is rated at 75 watts and has an efficiency of 90%, so it can deliver at most about 68 watts without serious degradation, and since the Wii U consumes 40 watts there are about 28 watts available. Still, I would only dare to raise power consumption to 60 watts, i.e. 20 additional watts, if it bought a real increase in performance.

*I won't list where I got all of this information, but I can assure you I did a lot of research and digging on the internet, collecting bits and pieces and then putting them together into one picture, and thus: "EUREKA!!"