fatslob-:O said:
Give me a list of instruction set architectures which can't use integers ... |
I was merely clarifying as your post could be misunderstood as implying it doesn't have an integer unit.
| curl-6 said: I was merely clarifying as your post could be misunderstood as implying it doesn't have an integer unit. |
If I can read microprocessor manuals then I'm pretty sure I could figure it out ... 
I was just saying that its SIMD capabilities are not applicable to integers ...
fatslob-:O said:
If I can read microprocessor manuals then I'm pretty sure I could figure it out ... I was just saying that its SIMD capabilities are not applicable to integers ... |
I didn't mean you would misunderstand your own post, I meant others might.
| curl-6 said: I didn't mean you would misunderstand your own post, I meant others might. |
Thanks for the warning, but I think it's futile to even try in a place such as the internet ...
| mine said:
Not common sense. Common sense is that compilers do the hard work for the programmer. The most important thing the developer can do is study the effects of their data structures, data and program flow on the CPU caches and act accordingly. So: as the Wii U has MORE L2 cache per core than the XOne and PS4 cores, code runs more efficiently on the Wii U cores. But the HD twins more than make up for it with more cores and, of course, a better GPU. BTW: take a look at the PS4 developer presentations. The small L2 cache is hurting BIG time when CPU AND GPU are accessing the GDDR... Or the other way round: the Wii U's big L2 cache, big eDRAM and balanced CPU core / GPU enabled it to deliver more than most people expected from such a configuration... |
On top of that, even in multithreaded game engines you are going to have one main thread with a heavier workload than the other threads. The Wii U CPU has an asymmetric L2 cache with 2 MB for one of the cores and 512 KB for each of the two other cores. That means its design has been optimized to run video games.
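To make the cache point a bit more concrete, here's a minimal C sketch (the matrix size and function names are made up, just to show how an access pattern interacts with a working set around a 2 MB L2):

```c
#include <stddef.h>

#define N 1024  /* hypothetical: 1024*1024 floats = 4 MB, larger than a 2 MB L2 */

/* Row-major traversal: consecutive accesses fall on the same cache line,
 * so once a line is fetched most loads are served from L1/L2. */
float sum_row_major(const float m[N][N]) {
    float sum = 0.0f;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += m[i][j];
    return sum;
}

/* Column-major traversal of the same data: consecutive accesses are
 * 4 KB apart, so every load touches a different cache line and the
 * 4 MB working set thrashes a 2 MB L2, stalling on main memory. */
float sum_col_major(const float m[N][N]) {
    float sum = 0.0f;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += m[i][j];
    return sum;
}
```
Same data, same amount of work, very different cache behaviour; that's the kind of thing the developer controls, not the compiler.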
WolfpackN64 said:
Depends, the POWER8 chips stomp all over the Intel Xeons. |
Totally right, although having 96 threads running at once (12 physical cores with 8-way multithreading, unlike Intel, which only does 2-way multithreading) is the main reason for this. Its IPC (instructions per clock) is a bit worse compared to Haswell's architecture, and its performance per watt is worse by leaps and bounds unless it's running tailor-made applications.
IBM POWER nowadays is solely developed for high-performance computing; what makes it strong there is often more of a handicap when applied to other uses, like consoles. Which is why I thought from the beginning that the Wii U's follow-up would have a totally different CPU. And since ARM is still too weak (especially when it comes to floating point operations, where it gets trounced by x86 designs of the same TDP) and carries the stigma of being a smartphone chip, x86 is the only logical choice for the next generation of consoles (unless for a streaming console, where ARM would suffice).
Also, POWER8 is a bit of a behemoth, as it's a freaking huge chip (bigger than almost all GPUs, in fact), which makes cooling it rather difficult outside of a server.
Mind you, IBM POWER could still be used in a console, but the chip would need to be so heavily customized it would barely even resemble what it was spun off from. Probably not worth the risk, even more so as you can't know what the chip would deliver in the end after stripping away everything a console wouldn't need or couldn't use; there wouldn't be much left.
curl-6 said:
According to Marcan it has SIMD, just weaker SIMD compared to its competitors. They basically compromised raw power in exchange for backwards compatibility. That said, it has some advantages over the last gen HD twins that help mitigate its lower clock speed; better IPC, out of order execution, the ability to offload work to the GPGPU and dedicated audio chip, etc. |
"Weaker" is one way to put as we could see in the further discussion in this thread but either way.
Keeping compatibility sadly didn't work out for Nintendo and I guess it would've been better to go for more power. OOOE is nice, of course but Xenon being IO could be helped with software to help with branch prediction leading to not so harsh pipeline stalls. Btw, GPGPU was also done on 360 but we don't know if also in games: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.219.4747
As of yet it is still unknown by how much games can benefit from GPGPU, especially how much GPU you have to sacrifice to help the CPU (leaving even less for the GPU to do $graphics).
But we were talking about the CPU: Yes, it has 3 cores. Yes, it has quite a bit cache. Yes, it has a good IPC. Yes, it fits nicely in the WiiU and its GPU. But there is no secret about it, no special ability in the CPU hidden. It is a 3 core ppc with a low clock rate where some features are there to help the low clock rate.
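To illustrate what "helping with software" can look like on an in-order core, here's a small hypothetical C sketch (not from any actual 360 codebase; the clamp functions and the UNLIKELY macro are just made-up examples):

```c
#include <stdint.h>

/* Branchy version: on an in-order core like Xenon, a mispredicted
 * branch here stalls the whole pipeline. */
int32_t clamp_branchy(int32_t x, int32_t lo, int32_t hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Branchless version: the compares can typically be turned into
 * conditional selects, so there is nothing left to mispredict. */
int32_t clamp_branchless(int32_t x, int32_t lo, int32_t hi) {
    int32_t t = x < lo ? lo : x;   /* often compiles to a select/isel */
    return t > hi ? hi : t;
}

/* GCC-style hint for branches the programmer knows are rare
 * (e.g. error paths); it lets the compiler keep the hot path
 * fall-through and move the cold code out of the way. */
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

int process(int status) {
    if (UNLIKELY(status != 0)) {
        return -1;            /* cold error path */
    }
    return 0;                 /* hot path */
}
```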

walsufnir said:
"Weaker" is one way to put as we could see in the further discussion in this thread but either way. Keeping compatibility sadly didn't work out for Nintendo and I guess it would've been better to go for more power. OOOE is nice, of course but Xenon being IO could be helped with software to help with branch prediction leading to not so harsh pipeline stalls. Btw, GPGPU was also done on 360 but we don't know if also in games: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.219.4747 As of yet it is still unknown by how much games can benefit from GPGPU, especially how much GPU you have to sacrifice to help the CPU (leaving even less for the GPU to do $graphics). But we were talking about the CPU: Yes, it has 3 cores. Yes, it has quite a bit cache. Yes, it has a good IPC. Yes, it fits nicely in the WiiU and its GPU. But there is no secret about it, no special ability in the CPU hidden. It is a 3 core ppc with a low clock rate where some features are there to help the low clock rate. |
No doubt.
And I never said it had any super secret sauce or anything, just that, well, like you say, some features are there to help the low clock rate.
Well, GPGPU is likely helpful for more complex FP calculations, especially stuff that can be parallelized well. The important thing is that, depending on how much you want to do, you might not need that much GPU power. But GPGPU still isn't practical for lots of things.
And any GPGPU work takes a more or less big chunk out of the GPU power left for graphics. It definitely still has its limits. GPGPU is no miracle, nor a source of unlimited power.
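To give a rough idea of what "parallelizes well" means here, a purely illustrative C sketch (the functions are hypothetical, not from any real engine):

```c
#include <stddef.h>

/* Embarrassingly parallel FP work: every particle is updated
 * independently, so this loop maps naturally onto hundreds of
 * GPU threads (or onto the CPU's SIMD lanes). */
void integrate(float *px, float *py, float *vx, float *vy,
               float dt, size_t n) {
    for (size_t i = 0; i < n; i++) {
        px[i] += vx[i] * dt;
        py[i] += vy[i] * dt;
    }
}

/* Pointer-chasing where each step depends on the previous one:
 * this gains nothing from GPGPU and stays a CPU job. */
struct node { struct node *next; int value; };

int walk(const struct node *n) {
    int sum = 0;
    while (n) {
        sum += n->value;
        n = n->next;
    }
    return sum;
}
```
And of course every cycle the GPU spends on the first kind of work is a cycle it doesn't spend on graphics, which is exactly the trade-off above.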
WolfpackN64 said:
The point I'm trying to make is that PowerPC in itself is not outdated, as sometimes claimed, but the Wii U's CPU in a way is. |
This. I don't know if anybody has already pointed it out, but I'd like to add that POWER is still a very scalable architecture. IBM mainly designs high-end models currently, but its partners in the project also design models all the way down to cheap, power-thrifty single-core embedded versions, or even multicores where each core is a lightweight version, for high-end routers, for example, that need to execute large numbers of simple tasks.
Ninty chose an evolution of a quite dated version (not exactly attributable to a single POWER or PowerPC family; it has an older PPC base with a few more modern POWER7 features: http://en.wikipedia.org/wiki/Espresso_%28microprocessor%29). While POWER8 hadn't been launched yet when the Wii U came out, and POWER7+ was maybe too recent to already have cheap enough versions (the first models launched were fast high-end server parts), Ninty could easily have chosen a POWER7 chip scaled exactly to its needs in terms of performance, power consumption and price.