I have a question about the Wii U's CPU. I don't really agree that it's slow. Can you compare it to the Xbox 360 and PlayStation 3 CPUs?
No, they can't be compared directly, for multiple reasons.
Mostly due to the fact that the Wii U's CPU has out-of-order execution, better integer and floating-point performance per clock, more cache, etc.
I wouldn't be surprised if it ate the Xbox 360 and PlayStation 3's CPUs for lunch.
At the instruction set level they can be compared.
I know that the Wii U's CPU is based on the PowerPC 750CL, so it's an old design, yet I wouldn't underestimate it that easily. It has a 4-stage pipeline, which is really short and should have little to no "bubbles", compared to the atrocious Xbox 360/PlayStation 3 CPUs with their 32-to-40-stage pipelines. Those are also in-order designs, whereas the GameCube/Wii/Wii U CPU is out-of-order, even though its out-of-order capability is somewhat limited, as I've read on some forums.
Pipeline length is only a problem if everything else is kept equal.
A processor with a longer pipeline but with lots of cache, a uop/loop buffer, plenty of low-latency bandwidth to system RAM, and a really good branch predictor (something the Xbox 360 and PlayStation 3 lack) can make it all a non-issue. Plus, a longer pipeline can help in reaching a higher frequency, for an overall larger performance benefit.
My i7-3930K, for instance, has up to a 19-stage pipeline and is one of the fastest CPUs money can buy; despite the longer pipeline, all those other benefits make it significantly faster than the old PowerPC designs.
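To make the pipeline-length argument concrete, here's a toy back-of-the-envelope model (my own illustration; the rates and costs are made up for the example, not measured from any of these CPUs). A mispredicted branch costs roughly the pipeline's depth in flushed cycles, so average cycles per instruction depend on both depth and predictor quality:

```python
# Toy model: average cycles per instruction (CPI) when branch
# mispredicts flush the pipeline. All inputs are illustrative.
def avg_cpi(base_cpi, pipeline_stages, branch_fraction, mispredict_rate):
    # A mispredict costs roughly the pipeline depth in flushed cycles.
    flush_penalty = pipeline_stages
    return base_cpi + branch_fraction * mispredict_rate * flush_penalty

# Short 4-stage pipeline with a weak (static) predictor:
short_weak = avg_cpi(1.0, 4, 0.2, 0.15)
# Long 19-stage pipeline with a good predictor:
long_good = avg_cpi(1.0, 19, 0.2, 0.02)

print(short_weak)  # 1.12
print(long_good)   # 1.076
```

With these made-up numbers, the 19-stage pipeline with a 2% mispredict rate ends up with *less* stall overhead than the 4-stage pipeline with a 15% mispredict rate, which is the point: predictor quality can matter more than raw pipeline depth.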
I'm not arguing that the Wii U isn't more powerful than the Xbox 360/PlayStation 3, but it certainly isn't as fast as the Xbox One or PlayStation 4 in any respect when comparing the physical processors in those machines.
I read an article, and it seems that the Wii U's CPU (Espresso) can directly access and use the eDRAM; maybe I am wrong?
It can, you aren't wrong.
The Wii U has a DSP while the Xbox 360/PlayStation 3 don't, so audio is done on one of their CPU cores, right? So only two cores are really available for the game, while a third acts as a DSP, and the OS also partially runs on one of those cores. Compare that to the Wii U, which is rumored to have two ARM cores used as "background" cores, plus another ARM core for backward compatibility with the Wii that could also be used.
I know that the Xbox 360 had a bottleneck involving RAM: it used GDDR3 at 22.8 GB/s, yet the FSB (or whatever that link is called) could only push 10.8 GB/s, and the PlayStation 3 also had some sort of bottleneck. The Wii U doesn't have that kind of bottleneck and uses DDR3-1600, so it has 12.8 GB/s like most computers nowadays. It also has 1 GB reserved for games, so it has almost three times more memory for game assets/data to store temporarily. DDR3 has much lower latency than GDDR3, so it's great for the OS and games, right?
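For reference, those bandwidth figures fall straight out of transfer rate times bus width. A quick sketch (the 64-bit and 128-bit bus widths are the commonly reported configurations, not something from official docs):

```python
# Peak theoretical bandwidth = transfers/sec x bus width in bytes.
def peak_bandwidth_gbs(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

# Wii U main memory: DDR3-1600 on a reported 64-bit bus.
print(peak_bandwidth_gbs(1600, 64))   # 12.8
# Xbox 360 GDDR3: 700 MHz x 2 (double data rate) on a 128-bit bus.
print(peak_bandwidth_gbs(1400, 128))  # 22.4
```

The second figure works out to 22.4 GB/s, close to the 22.8 figure quoted above; either way, the point about the 10.8 GB/s link being the narrower pipe stands.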
Up to a point. The Xbox 360, for instance, had a DSP that could offload audio processing for up to 256 channels of 48 kHz, 16-bit tracks.
Thus, if you wanted to do 24-bit or 32-bit audio, you would have to spend CPU time.
Conversely, the Xbox 360's GPU could also offload some audio tasks if a developer saw fit; it's just flexible enough to do so (more so than the PlayStation 3, that's for sure).
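Just to put that 256-channel figure in perspective, here's the raw PCM data rate it implies (my own arithmetic, assuming uncompressed samples):

```python
# Raw PCM data rate: channels x samples/sec x bytes per sample.
def audio_data_rate_mbs(channels, sample_rate_hz, bits):
    return channels * sample_rate_hz * bits / 8 / 1e6

# 256 channels of 48 kHz, 16-bit audio:
print(audio_data_rate_mbs(256, 48_000, 16))  # 24.576 (MB/s)
# The same channel count at 24-bit:
print(audio_data_rate_mbs(256, 48_000, 24))  # 36.864 (MB/s)
```

Going from 16-bit to 24-bit samples is a 50% increase in raw data, which is roughly the extra work that would land on the CPU once the dedicated hardware's limit is exceeded.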
As for bottlenecks, every computer system, be it a PC, gaming console, or phone, has some form of bottleneck, whether in storage, graphics, processor, or system memory.
Essentially, the bottleneck is whatever limitation the developers run into first. Usually they build within the limitations of the hardware, but bottlenecks can change from one frame to the next while rendering a scene, because the data being processed is always changing.
For example, if the current and next generation consoles theoretically ran a game like StarCraft 2, a big bottleneck would actually be the CPU, due to the sheer number of units that can be on screen at any one time.
If you fired up a game of Battlefield 4, you would find that it's more GPU-limited, due to the heavy effects the game employs.
Or if you could run Civilization IV, you would be GPU-limited while playing your turn, but when you finish your turn and the computer players take theirs, you would quickly find yourself CPU-limited.
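A crude way to picture the "bottleneck flips with the workload" idea (a toy model of mine, with invented frame times, not profiled numbers): the frame takes as long as its slowest stage, so whichever side is slower at that moment is the bottleneck.

```python
# Toy per-frame model: CPU and GPU work overlap, so the frame time is
# dominated by the slower of the two. Inputs are invented examples.
def frame_ms(cpu_ms, gpu_ms):
    bound = "CPU" if cpu_ms >= gpu_ms else "GPU"
    return max(cpu_ms, gpu_ms), bound

# Big RTS battle: heavy unit simulation, modest rendering.
print(frame_ms(28.0, 9.0))   # (28.0, 'CPU')
# Effects-heavy shooter scene: light simulation, heavy rendering.
print(frame_ms(6.0, 21.0))   # (21.0, 'GPU')
```

Same hardware, two different frames, two different bottlenecks, which is why "the bottleneck" of a console is never just one component.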
As for memory latency, both GDDR3 and GDDR5 typically have higher latency than DDR3, but it's not significant: you're looking at 20-30% at most, and even then it's going to make a negligible performance difference anyway, due in part to caches and eDRAM/eSRAM and all their variations.
Plus, consoles are typically more GPU-oriented, and GPUs really don't care much about memory latency; bandwidth is the determining factor.
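One way to see why bandwidth matters more than latency for GPUs: by Little's law, sustaining peak bandwidth despite latency just requires enough memory requests in flight, and a GPU running thousands of threads can generate them easily. A sketch with assumed numbers (22.4 GB/s and 500 ns are illustrative, not measured):

```python
# Little's law: bytes in flight = bandwidth (bytes/s) x latency (s).
# Done in integers to keep the arithmetic exact.
def inflight_bytes(bandwidth_bytes_per_s, latency_ns):
    return bandwidth_bytes_per_s * latency_ns // 1_000_000_000

# 22.4 GB/s at an assumed 500 ns memory latency:
print(inflight_bytes(22_400_000_000, 500))  # 11200
```

About 11 KB in flight is roughly 175 outstanding 64-byte cache lines; thousands of GPU threads cover that trivially, while a CPU core with only a handful of outstanding misses can't, which is why latency bites CPUs much harder.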