| petalpusher said:
|
I was comparing it to the Xbox 360 and PlayStation 3; I apologise, I could have been clearer on that point.
As for OoO being better for poorly optimized code, well, that isn't exactly accurate.
The whole idea of OoO execution is that instead of stalling the pipeline whilst the CPU waits on the next instruction in the queue, an OoO processor slots another, independent instruction in so the execution units stay constantly utilised. Developers generally don't deal with such intricate scheduling details unless you are a big first party like Naughty Dog or 343 Industries.
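To make that concrete, here is a minimal sketch in plain C (nothing Wii U-specific; the function names and the 1.0001f constant are purely illustrative):

```c
#include <stddef.h>

/* A serial dependency chain: each step needs the previous result.
   No reordering can hide this chain itself, but the difference shows
   up in the surrounding code: an OoO core keeps executing later,
   independent instructions while this chain waits on results; a
   strict in-order core simply stalls. */
float chained_sum(const float *a, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc = acc * 1.0001f + a[i];   /* the next step needs this result */
    return acc;
}

/* The same reduction restructured into four independent chains. An OoO
   core overlaps these multiply-adds regardless of how the compiler
   ordered them; on an in-order core (Xenon, the Cell's PPE) you rely on
   the compiler or programmer producing exactly this kind of schedule. */
float split_sum(const float *a, size_t n) {
    float acc0 = 0.0f, acc1 = 0.0f, acc2 = 0.0f, acc3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        acc0 = acc0 * 1.0001f + a[i];
        acc1 = acc1 * 1.0001f + a[i + 1];
        acc2 = acc2 * 1.0001f + a[i + 2];
        acc3 = acc3 * 1.0001f + a[i + 3];
    }
    float acc = acc0 + acc1 + acc2 + acc3;
    for (; i < n; i++)                /* handle the leftover elements */
        acc = acc * 1.0001f + a[i];
    return acc;
}
```

That's the sense in which OoO helps "unoptimised" code: the hardware does at runtime the instruction scheduling that in-order designs push onto the developer and compiler.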
As for the PowerPC 750x being last century, yeah, I agree to an extent.
Unless you have some low-level information on the changes that have been made to Espresso (and I would like to hear it if you do!), we have no idea what changes Nintendo and IBM have made to the die.
For example, Intel has "evolved" its CPU architecture, which largely stems from the P6 core (with the Pentium 4/D being based on NetBurst); thus even today, my Sandy Bridge-E 6-core/12-thread processor has roots in the P6 core introduced with the Pentium Pro in 1995.
Does that make it last century and thus crap? No. It's still one of the fastest CPUs in existence.
| petalpusher said: There's no new SIMD instructions; it's still 2x32-bit SIMD + 1x32-bit integer with the half-hundred instruction set of the original Gekko. They've added L1/L2 and tripled the cores, which is no miracle on a 45nm process; the CPU remains a very small piece of silicon. It's a short-pipeline processor, which limits its frequency (less than twice the Broadway). It's a mere 15 GFLOPS CPU, about ten times less powerful than Cell/Xenon; decent in general purpose, but against even a single core of the Xenon or the Cell's PPE it's still weaker. When devs said it was horrible, it wasn't trolling. |
You lost the argument when you used gigaflops. They're only a gauge of performance between CPUs of the same type; my CPU can break 100 gigaflops in synthetics, but it's still superior to the Cell in every single way, and here is why: CPUs do more than just floating-point math, and game engines do more than just deal with floating-point numbers.
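As a hypothetical illustration (the struct and function here are made up for the example, not from any real engine), this is the sort of work a game engine spends huge amounts of time on, where a GFLOPS rating tells you precisely nothing:

```c
#include <stddef.h>

/* A toy scene-graph node: engines chase pointers through structures
   like this constantly for culling, AI, physics broad-phase, etc. */
struct entity {
    struct entity *next;   /* linked traversal, serial cache-miss chain */
    int            flags;
    int            score;
};

/* Count entities with a flag set: branchy, memory-latency-bound, and
   not a single floating-point operation in sight. This is where a big
   OoO core with good caches and a branch predictor runs away from a
   narrow in-order design, whatever the GFLOPS numbers say. */
int count_active(const struct entity *e) {
    int count = 0;
    while (e != NULL) {
        if (e->flags & 1)  /* data-dependent branch */
            count++;
        e = e->next;
    }
    return count;
}
```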
So, I assume you have a detailed die shot and understand what everything on it is, and Nintendo has provided a white paper so you know every single detail about the processor? You can add new SIMD instructions whilst providing full backwards compatibility with prior variations.
For example, SSE and SSE2.
If anything, it's a smart decision to retain underlying hardware backwards compatibility; developers have had experience with it over prior generations, so it will make development easier.
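That's exactly how it works on the PC side: each new SIMD extension is a superset, old code keeps running, and new code probes for features at runtime. A minimal sketch, assuming GCC on x86 (the __builtin_cpu_supports builtin is GCC-specific):

```c
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();  /* populate GCC's CPU feature flags */

    /* Old binaries simply never emit the new instructions; new binaries
       can ask the CPU what it supports and pick a code path. */
    if (__builtin_cpu_supports("sse2"))
        puts("SSE2 available: dispatch to the SSE2 code path");
    else
        puts("No SSE2: fall back to the scalar code path");
    return 0;
}
```

The same principle would let IBM extend Gekko/Broadway's paired singles without breaking a single line of existing Wii code.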
| petalpusher said: About the GPU, it's still 8 ROPs / 16 TMUs with very limited main bandwidth at 12.8 GB/s; you can't expect much from that, nor aim for any kind of serious compute pipeline.
|
You would be surprised how well a desktop GPU with 12.8 GB/s of memory bandwidth can handle games, especially at 720p, and especially modern GPUs with bandwidth-saving technologies like more advanced occlusion culling and AMD's 3Dc texture/normal-map compression. In fact, grab a GPU like the Radeon 6450, which is probably weaker than the Wii U; it can handle something like Unigine Heaven rather well at 720p and 30fps with most settings on low, even with some low-factor tessellation. And that's something that's not even programmed at the metal!
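Some back-of-the-envelope arithmetic makes the point (the buffer sizes and frame rates are illustrative assumptions, not measured Wii U behaviour):

```c
#include <stdio.h>

int main(void) {
    const double bandwidth   = 12.8e9;                /* bytes per second on the main bus */
    const double frame_bytes = 1280.0 * 720.0 * 4.0;  /* one 32-bit 720p buffer, ~3.7 MB */

    for (int fps = 30; fps <= 60; fps += 30) {
        double budget = bandwidth / fps;              /* bytes available per frame */
        printf("%d fps: %.0f MB per frame = ~%.0f full 720p buffer passes\n",
               fps, budget / 1e6, budget / frame_bytes);
    }
    return 0;
}
```

Roughly 427 MB of traffic per frame at 30fps, or over a hundred full-screen buffer passes; tight, but hardly unworkable at 720p.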
Even if the GPU were slower than the Xbox 360's and PlayStation 3's on most paper specs, I would still expect it to do better; it's generally a more efficient architecture all round.
Besides, you are also excluding the eDRAM from the bandwidth numbers, which, when programmed the right way, can give a real kick in the pants when needed.
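The point of that eDRAM is that traffic which stays inside it never touches the 12.8 GB/s bus at all. A rough sketch of the arithmetic, assuming the commonly reported 32 MB pool (the buffer layout below is an illustrative assumption, not documented Wii U behaviour):

```c
#include <stdio.h>

int main(void) {
    const long edram_bytes = 32L * 1024 * 1024;  /* assumed 32 MB eDRAM pool */
    const long pixels      = 1280L * 720;
    const long colour      = pixels * 4;         /* 32-bit colour buffer */
    const long depth       = pixels * 4;         /* 24-bit depth + 8-bit stencil */
    const long working_set = 2 * colour + depth; /* double-buffered colour + Z */

    printf("720p working set: %.1f MB of %.0f MB eDRAM (%s)\n",
           working_set / 1048576.0, edram_bytes / 1048576.0,
           working_set <= edram_bytes ? "fits on-die" : "spills to main RAM");
    return 0;
}
```

If your render targets fit on-die, every colour write, depth test, and blend happens at eDRAM speed, and the 12.8 GB/s figure only has to cover textures, geometry, and CPU traffic.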
But don't let logic get in the way.

www.youtube.com/@Pemalite