petalpusher said:
Zappykins said:
petalpusher said:
Zappykins said:

There is so much more that the 7950 has going for it, so it should have higher performance.  But even with all those advantages, it only a little more than doubles the performance, despite having nearly three times the CU cores.  It should be much higher.

Oh, it works with the HD 7850 vs HD 7770 too (and the HD 7770 still has a way higher clock). I just took the extreme example, the one that produces a 100% increase in performance with CUs running at a much lower clock. In fact it works with every GPU in the GCN architecture: if you scale up the CU count, it performs better, not on paper but in real-world performance. And, I would add, it just keeps scaling really well if you also have more bandwidth and ROPs in conjunction, just like the PS4 has. Between the 7770/7790 and the 7850/7870 there is a significant gap, a 50% real-world performance difference, sometimes even more.

So trying to spin away the CU advantage with frequency or "inefficiency" arguments is just ridiculous.

And let's keep in mind the PS4's GPU has 18 CUs, not 16 like the HD 7850, plus more TMUs (+4), more bandwidth than an HD 7870 (145 GB/s for the 7870 vs 176 GB/s for the PS4), the ACE customizations, etc.
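For scale, here is a rough back-of-the-envelope sketch with a throwaway tflops() helper. It assumes the standard GCN figures of 64 ALUs per CU and 2 FLOPs (FMA) per ALU per clock, plus the commonly cited reference clocks; it counts ALU throughput only and ignores ROPs, bandwidth and everything else that decides real frame rates:

```python
# Rough theoretical single-precision throughput from CU count and clock only.
# Assumes 64 ALUs per GCN CU and 2 FLOPs (FMA) per ALU per clock;
# clocks are the commonly cited reference values, not measured figures.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

for name, cus, clock_ghz in [("HD 7850", 16, 0.860),
                             ("PS4 GPU", 18, 0.800),
                             ("HD 7870", 20, 1.000)]:
    print(f"{name}: {cus} CU @ {clock_ghz * 1000:.0f} MHz -> "
          f"{tflops(cus, clock_ghz):.2f} TFLOPS")
```

That lands the PS4 at roughly 1.84 TFLOPS, right between the 7850's ~1.76 and the 7870's ~2.56.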

Just a few days ago we were debating the funny assumption that the X1 would get a 4.8 TFLOPS stacked "dGPU" (that was a good laugh); now we're back to an HD 7770/7790-level part that would magically perform better than GPUs with 50% more CUs and 100% more ROPs.

Xbox extremists always deliver in hardware discussions.

I completely agree with the highlighted part, but have you read what you are saying?  You say it doesn't work and doesn't matter, but then it magically matters on the PS4.

The 7950 has nearly three times the CUs of the 7790, yet is barely twice as powerful.  So CUs don't make the significant difference you would expect on the 7950.  Why does it matter on the GPU cards but not on consoles?  Isn't that what you are saying, or do you mean something different?  Not trying to be hostile, it just seems contradictory.


What exactly doesn't work?

The 7790 has 14 CU (2 more than the X1) and runs at 1000 MHz, while the 7950 has 28 CU and was running at 800 MHz when it was introduced (the latest version runs at 850 MHz on reference cards). But anyway, to me 14 vs 28 CU is two times, right?

And it performs accordingly: more or less two times the performance.
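Same kind of sketch for those two (again ALU throughput only, reference clocks assumed, with a hypothetical tflops() helper):

```python
# ALU throughput only; 1000 MHz and 800 MHz are the reference clocks.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

hd7790 = tflops(14, 1.000)   # ~1.79 TFLOPS
hd7950 = tflops(28, 0.800)   # ~2.87 TFLOPS
print(f"HD 7950 / HD 7790 ALU ratio: {hd7950 / hd7790:.2f}x")  # ~1.60x
```

The raw ALU ratio comes out around 1.6x because of the clock gap; the doubled ROPs and the much wider memory bus are what push the real-world gap toward 2x.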

The big picture of the 7th-gen GCN lineup is quite clear:

The PS4 is between the 7850 (16 CU) and the 7870 (20 CU) with 18 CU; the X1 is between the HD 7770 (10 CU) and the HD 7790 (14 CU) with 12 CU. The HD 7970 chip is a huge GPU that would have produced a $600 console again, and that GPU was too big to fit in a single SoC: the GPU die alone is about the same size as the whole X1 SoC (CPU/ESRAM/GPU and every piece of dedicated hardware silicon), and probably the PS4 SoC too.

It's an efficient, proven architecture that scales VERY well, and when you look at the performance/watt ratio, Pitcairn (7850/7870) has always been considered the best setup; it's the sweet spot of this architecture in efficiency per watt (and having 32 ROPs like the HD 7950/7970 is the winning move, not the 50% more CUs on their own). So in fact it doesn't work like in Penello's FUD.

The new HD 7790 is really good on performance/watt too, because it can do 2 triangles/cycle unlike the HD 7770, which was lacking a bit in perf/watt. So I'm not saying the X1 is not effective, just that having 18 CU is equally effective. There is no inherent loss from having more CUs, and the guy from Ars Technica debunks that too. That's what forums are for as well: discussing what is information and what is FUD.

 

PS: my English sounds like a wrecked train sometimes (most of the time) because I'm French :D

Exactly.  He can claim that 50% more cores doesn't net you 50% more power, but he is ignoring the fact that the PS4 also has 50% more ROPs/TMUs/etc.  As a matter of fact, the 7970 has double the cores of the 7850, and guess what?  It performs twice as well!
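For what it's worth, the same rough ALU-only math (reference clocks assumed, real-world gaming results will vary) is in that ballpark:

```python
# ALU throughput only; 925 MHz and 860 MHz are the reference clocks.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

hd7970 = tflops(32, 0.925)   # ~3.79 TFLOPS
hd7850 = tflops(16, 0.860)   # ~1.76 TFLOPS
print(f"HD 7970 / HD 7850: {hd7970 / hd7850:.2f}x")  # ~2.15x
```

And the 7970 also brings double the TMUs and far more memory bandwidth along for the ride.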

Then add in the fact that the PS4 has WAY more bandwidth and hUMA, and it is easy to see how it will perform twice as well, as some developers have directly suggested.  Get your heads out of the cloud, people...