Are HD 7870 GPUs all 'perfect' 20 CU parts, or are they themselves the result of binning, with defective units sold as lower-spec'd GPUs, etc.?
In any case, that type of comparison is pretty limited because it's just looking at the grossest stats, e.g. # of CUs, texture units, etc.
What is actually unique about the PS4's GPU is the ACEs, and it actually has the same number as the top-of-the-line R9 290X:
8 ACEs * 8 queues per ACE = 64 compute command queues vs. (I believe) 16(!!!) for the XBone.
Plenty of areas of a GPU (or CPU) sit idle at any given moment, and GPGPU/async compute can take advantage of that,
not to mention GPGPU approaches can in fact handle 'traditional graphics' work more efficiently than standard shaders alone,
e.g. a compute pass to cull geometry up front instead of spending shader work on geometry that never ends up on screen (see the sketch below).
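To make that culling point concrete, here's a minimal sketch in plain CUDA (the names, numbers, and frustum values are made up for illustration; a real engine would do this through the console's own graphics/compute API, not CUDA). Each thread tests one object's bounding sphere against the six frustum planes and compacts the survivors into a draw list, so later stages never touch geometry that can't be visible:

// Hypothetical sketch of compute-based frustum culling, not any console SDK's actual API.
#include <cstdio>
#include <cuda_runtime.h>

struct Sphere { float x, y, z, r; };

__global__ void cullSpheres(const Sphere* spheres, int count,
                            const float4* planes,    // 6 planes: xyz = normal, w = d
                            int* visibleIdx, int* visibleCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    Sphere s = spheres[i];
    for (int p = 0; p < 6; ++p) {
        float dist = planes[p].x * s.x + planes[p].y * s.y +
                     planes[p].z * s.z + planes[p].w;
        if (dist < -s.r) return;           // fully outside this plane -> culled
    }
    int slot = atomicAdd(visibleCount, 1); // compact survivors into a draw list
    visibleIdx[slot] = i;
}

int main()
{
    const int N = 4;
    Sphere hostSpheres[N] = {
        {  0, 0,  5, 1 },   // in front of the camera
        {  0, 0, -5, 1 },   // behind the camera
        { 50, 0,  5, 1 },   // far off to the side
        {  2, 0, 10, 1 },   // in front
    };
    // Crude made-up frustum: near/far plus left/right/bottom/top planes.
    float4 hostPlanes[6] = {
        {  0,  0,  1,  -1 },   // near   (z >=  1)
        {  0,  0, -1, 100 },   // far    (z <= 100)
        {  1,  0,  0,  20 },   // left   (x >= -20)
        { -1,  0,  0,  20 },   // right  (x <=  20)
        {  0,  1,  0,  20 },   // bottom (y >= -20)
        {  0, -1,  0,  20 },   // top    (y <=  20)
    };

    Sphere* dSpheres; float4* dPlanes; int *dIdx, *dCount;
    cudaMalloc(&dSpheres, sizeof(hostSpheres));
    cudaMalloc(&dPlanes,  sizeof(hostPlanes));
    cudaMalloc(&dIdx,     N * sizeof(int));
    cudaMalloc(&dCount,   sizeof(int));
    cudaMemcpy(dSpheres, hostSpheres, sizeof(hostSpheres), cudaMemcpyHostToDevice);
    cudaMemcpy(dPlanes,  hostPlanes,  sizeof(hostPlanes),  cudaMemcpyHostToDevice);
    cudaMemset(dCount, 0, sizeof(int));

    cullSpheres<<<1, 64>>>(dSpheres, N, dPlanes, dIdx, dCount);

    int visible = 0;
    cudaMemcpy(&visible, dCount, sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d of %d objects survive culling\n", visible, N);
    return 0;
}

The design point being illustrated: this kind of pass is bandwidth-light, branchy work that can run on otherwise-idle CUs alongside rendering, which is exactly the kind of job those extra compute queues are there to feed.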
In a way, this is similar to the PS3's SPUs (i.e. what gave the PS3 its advantage in late-generation games), except here it's exercised the same way programmers are already using GPGPU on PC.
This is exactly what the PS4 offers as "headroom" for growth/optimization; otherwise it can do the same optimizations the XBone can (just with more CUs).
POSSIBLY, MAYBE, the XBone's ESRAM could be used at such high efficiency that its bandwidth surpasses the PS4's GDDR5,
but besides that hugely constraining development approaches to fit the 32 MB window, any benefit will also be limited by the XBone's fewer CUs.
So the point of reference here should be the R9 290X (but with unified GDDR5 memory shared with the CPU), not simply the HD 7870 based on raw CU count.