dahuman said:
Soleron said:
Viper1 said:
ethomaz said:
Is the Wii U's CPU really bad, is it Jim Sterling hating again, or is it lazy developers???

...

 

* To illustrate how this works, I'm going to use a series of figures purely for demonstration purposes. They in no way accurately depict the real figures, but they can give you a sense of how to handle the workload.

Say the PS3 and X360 can handle 6 operations per clock cycle. At 3.2 GHz, that's 19.2 billion operations per second.
The game engine is designed around that clock speed.

Now say the Wii U can handle 10 operations per clock cycle but is clocked much slower, at just 2 GHz. That's still 20 billion operations per second.
But the game engine as it was designed is expecting a much faster clock that isn't there, so the game engine can only operate at 62.5% of its design capacity (2 GHz / 3.2 GHz). Now they have to rework the game engine to take advantage of the more efficient clock cycles, which is what they promised to be working on but apparently couldn't manage in time.
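To make the arithmetic in that illustration concrete, here's a tiny sketch. The numbers are the same made-up figures from the post above, not real hardware specs:

```python
# Illustrative throughput math using the post's made-up figures.
# These are NOT real Wii U / PS3 / 360 numbers.

def throughput(ops_per_cycle, clock_ghz):
    """Billions of operations per second for a given IPC and clock."""
    return ops_per_cycle * clock_ghz

ps3_360 = throughput(6, 3.2)   # 19.2 billion ops/s
wiiu    = throughput(10, 2.0)  # 20.0 billion ops/s

# An engine tuned for a 3.2 GHz clock, run unmodified on a 2 GHz part,
# only sees the fraction of its expected clock that is actually there:
utilization = 2.0 / 3.2        # 62.5% of design capacity

print(ps3_360, wiiu, round(utilization * 100, 1))
```

The point of the sketch is just that raw throughput and clock speed are different things: the slower-clocked chip can have higher total throughput, yet code tuned to the faster clock still underperforms until it's reworked.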

No, that doesn't make sense. Game code responds well to both greater IPC and greater clock speed.

The problem is that the Wii U is NOT capable of '10' operations per cycle. It's worse than the 360, because it has a third of the transistor count. Imagine it's capable of '4'.

Even if perfectly optimised for each, it'd still run better on 360.

I think the only thing better is the OOE (out-of-order execution) part, but like I mentioned either here or in other threads, overall it won't be faster than Xenon or Cell, since I'm also aware of that die size. I'm leaning more towards a dual or tri core at maybe 1-1.6 GHz tops, which might not be super bad depending on the kind of RISC design on that thing. I'm thinking they just want it to execute code and aren't even worried about physics calculations on that thing LOL, prolly just like "meh, they can offload some to the GPU, done."

 

"“They put a lot of thought on how CPU, GPU, caches and memory controllers work together to amplify your code speed. For instance, with only some tiny changes we were able to optimize certain heavy load parts of the rendering pipeline to six times the original speed, and that was even without using any of the extra cores." -nano assaultassault

It seems it's a PowerPC, but very customized, in that it actually connects the RAM with the GPU to do some of the Wii U's work. That's very much possible, or it could be like what you said, which would be funny, as they could have really boosted the GPU so it can handle more with some eDRAM/RAM.

After reading Iwata Asks again with the console makers, I have to say it seems they have made a system perfectly designed as a GPGPU-based console.

The Xbox 360 had GPGPU capability, but it wasn't up to par with current chips, so the CPU was forced to be clocked high because the GPU, even though stronger than the PS3's, couldn't handle both. The Xbox 360 is praised for how you can use GPGPU, but it's so limited it's a joke. If what they say is true about the Wii U's GPU being a heavily modded 6760 with the ability to team up with the CPU, you will get some insane-looking games. I wouldn't be surprised if it has some type of thing on the GPU itself just for the CPU, which is possible if they customized it enough, like they did with the Xbox 360 (having GPU features that were not on the normal chip).

I think the Nano Assault developer is basically saying, "if developers learn the connection between the GPU and CPU, you will get better performance."

Just for reference, from Nano Assault:

"Linzner noted that the Wii U already has plenty of power, but said that it also has a lot of potential for optimising, meaning that in a few years time when developers really start exploring the console’s limits, we ought to see some impressive performance."



"Excuse me sir, I see you have a weapon. Why don't you put it down and let's settle this like gentlemen"  ~ max