Considering number of ACEs and TrueAudio, I'd say that architecturally it has more in common with VI than SI...
| freedquaker said: As a comparison, XB1 GPU is an underclocked 7790 with much slower memory (despite the fast esram). So the gap is larger than the simple comparison below.
http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+HD+7870&id=324 http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+HD+7790&id=2502 |
Plus, you have to account for the fact that MS reserves 10% of that power for the Kinect, which makes the gap even larger. Either MS completely underestimated how big a leap Sony was willing to take, OR they just didn't care and were aiming for a decent leap with a much larger focus on apps. I think it's a combination of the two. All I know is, man, the graphics gap will be huge with 1st party exclusives in the next year or so.
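To put rough numbers on that reservation: here's a quick back-of-envelope calc using the public spec-sheet figures (12 CUs @ 853MHz for XB1, 18 CUs @ 800MHz for PS4) and the reported 10% reserve. Treat it as napkin math, not a benchmark.

```cuda
// Napkin math on the raw compute gap, using public spec numbers and
// the reported ~10% GPU reservation for Kinect/system on XB1.
#include <cstdio>

int main()
{
    // Peak FLOPS for GCN: CUs * 64 lanes * 2 ops per FMA * clock (Hz)
    const double xb1Peak = 12 * 64 * 2 * 853e6;   // ~1.31 TFLOPS
    const double ps4Peak = 18 * 64 * 2 * 800e6;   // ~1.84 TFLOPS

    const double xb1Usable = xb1Peak * 0.90;      // minus the 10% reserve

    std::printf("XB1 peak:   %.2f TFLOPS\n", xb1Peak / 1e12);
    std::printf("XB1 usable: %.2f TFLOPS\n", xb1Usable / 1e12);
    std::printf("PS4 peak:   %.2f TFLOPS\n", ps4Peak / 1e12);
    std::printf("PS4 advantage over usable XB1: %.0f%%\n",
                (ps4Peak / xb1Usable - 1.0) * 100.0);
    return 0;
}
```

That works out to roughly a 55-60% raw compute advantage once the reserve is counted, versus ~40% on paper.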
Aren't they nearly all the same chip? The crappy yield ones just get parts turned off. So it's technically any number of chips?

Really? But its performance in multiplats (namely BF4) is perilously close to an HD7870 rig, though. With optimizations I would've thought it'd easily beat out a Windows HD7870 PC, even at launch, but that doesn't seem to be the case. Why is that?

Are HD7870 GPUs all 'perfect' 20 CU units, or are they themselves the results of binning, with partially defective chips sold as lower spec'd GPUs, etc.?
In any case, that type of comparison is pretty limited because it's just looking at the grossest stats, e.g. # of CUs, texture units, etc.
What is actually distinctive about PS4's GPU is its ACEs, and it has the same count as the top of the line R9 290X:
8 ACEs * 8 queues per unit = 64 compute command queues vs. (I believe) 16(!!!) for XBone.
Plenty of areas of a GPU (or CPU) are unused at any given moment, and GPGPU work can take advantage of that.
Not to mention, GPGPU approaches can in fact handle 'traditional graphics' work more efficiently than standard shaders alone,
e.g. a GPGPU program that culls geometry up front instead of running shaders on geometry that never ends up getting displayed.
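For anyone curious what "culling with compute instead of shaders" looks like in practice, here's a minimal sketch. It's written in CUDA purely for illustration (the consoles are AMD GCN with their own APIs, not CUDA), and every name in it is made up; the idea is just one thread per object, a frustum test, and a compacted list of survivors handed to the graphics pipeline.

```cuda
// Hypothetical sketch of compute-based culling: each thread tests one
// object's bounding sphere against the view frustum and appends visible
// object indices to a compact list, so the graphics pipeline never
// processes geometry that would be discarded anyway.
#include <cuda_runtime.h>

struct Sphere { float x, y, z, r; };

// 'planes' holds the six frustum planes as (a, b, c, d) with unit normals
// pointing inward; a sphere is visible if it is not fully behind any plane.
__global__ void cullSpheres(const Sphere* spheres, int count,
                            const float4* planes,
                            int* visibleIndices, int* visibleCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    Sphere s = spheres[i];
    for (int p = 0; p < 6; ++p) {
        float4 pl = planes[p];
        float dist = pl.x * s.x + pl.y * s.y + pl.z * s.z + pl.w;
        if (dist < -s.r) return;   // fully outside this plane: culled
    }
    // Survived all six planes: atomically reserve a slot in the output list.
    int slot = atomicAdd(visibleCount, 1);
    visibleIndices[slot] = i;
}
```

On real hardware you'd launch work like this on a compute queue alongside the graphics workload, which is exactly where having 64 queues instead of 16 gives the scheduler room to fill idle units.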
In a way, this is similar to PS3's SPUs (i.e. what gave PS3 the advantage in late games), except it's used the way programmers are already using GPGPU on PC.
This is exactly what the PS4 offers as "headroom" for growth/optimization; otherwise it can do the same optimizations that XBone can (but with more CUs).
POSSIBLY MAYBE XBone's ESRAM could be used at such high efficiency that its effective bandwidth surpasses PS4's GDDR5,
but besides hugely limiting development approaches to fit the 32MB window, the benefit would still be capped by XBone's fewer CUs.
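The 32MB constraint is easy to see with plain arithmetic. The sketch below assumes a common-case 1080p deferred setup (4 G-buffer targets plus depth, all at 4 bytes per pixel); that layout is an assumption for illustration, not inside knowledge of any shipped title.

```cuda
// Back-of-envelope check (host-side, no vendor API assumed) of what
// fits in a 32 MB ESRAM window.
#include <cstdio>

int main()
{
    const double MiB = 1024.0 * 1024.0;
    const double esram = 32.0;                 // MB of ESRAM on XB1

    // Assumed 1080p deferred layout: 4 G-buffer targets + depth,
    // all at 4 bytes per pixel.
    const double w = 1920, h = 1080, bpp = 4;
    double oneTarget = w * h * bpp / MiB;      // ~7.9 MB each
    double gbuffer   = 5 * oneTarget;          // ~39.6 MB total

    std::printf("one 1080p RGBA8 target: %.1f MB\n", oneTarget);
    std::printf("4 G-buffer targets + depth: %.1f MB (budget: %.0f MB)\n",
                gbuffer, esram);
    // ~39.6 MB > 32 MB, which is why devs must tile, drop targets,
    // or shuttle data between ESRAM and main DDR3.
    return 0;
}
```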
So the frame of reference here should be the R9 290X (but with unified GDDR5 memory shared with the CPU), not simply the HD7870 based on raw CU count.
Guys are right...
Spec wise = HD 7870
GCN wise = GCN 1.1 (the same as R9 290X, R7 260)
| Trunkin said: Really? But its performance in multiplats (namely BF4) is perilously close to an HD7870 rig, though. With optimizations I would've thought it'd easily beat out a Windows HD7870 PC, even at launch, but that doesn't seem to be the case. Why is that? |
Yes, without those optimizations it is on par with an HD7870, and in a cross-platform title like BF4 especially, they just aren't making them at this point.
To make them, developers would have to lean heavily on the PS4's extra compute queues (64 vs. the XBone's 16), and clearly they are
not going to write that much platform-specific optimization with unique codepaths (that even most PCs can't handle) at this stage in the game.
With the ACEs, though, Sony has chosen an approach that IS in line with the general state of the art in graphics programming.
Even if the number of ACEs is beyond anything seen outside top of the line PC GPUs, the same sorts of programs can be and are being written for ACEs,
so there is a large pool of programmers ready and able to make use of them, and what they learn on PS4 will carry over to future PC gaming.
PS4's unified memory should further enable synergy between ACE/GPGPU work and threads running on the CPU, on top of the general benefits of fixed hardware,
which XBone also shares, albeit with fewer CUs, etc. And informed opinion seems to be that Sony's API is more powerful than MS's.
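As a loose analogy for what unified memory buys you, here's the equivalent with CUDA managed memory on a PC; on PS4 the actual API is Sony's own, and there is no migration happening underneath, since CPU and GPU genuinely share one pool of GDDR5.

```cuda
// Rough analogy (CUDA managed memory on PC; the PS4's real API is
// Sony's, not this) for what a unified address space buys you: CPU and
// GPU touch the same allocation with no explicit copy between passes.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scaleOnGpu(float* data, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;       // GPU pass writes in place
}

int main()
{
    const int n = 1 << 20;
    float* data = nullptr;
    // One allocation visible to both processors; on a true UMA console
    // this is simply how all memory works, with nothing migrating.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes directly

    scaleOnGpu<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();

    std::printf("data[0] = %.1f (no memcpy in either direction)\n", data[0]);
    cudaFree(data);
    return 0;
}
```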
| mutantsushi said: Are HD7870 GPUs all 'perfect' 20 CU units, or are they themselves the results of binning, with partially defective chips sold as lower spec'd GPUs, etc.? In any case, that type of comparison is pretty limited because it's just looking at the grossest stats, e.g. # of CUs, texture units, etc. What is actually distinctive about PS4's GPU is its ACEs, and it has the same count as the top of the line R9 290X: 8 ACEs * 8 queues per unit = 64 compute command queues vs. (I believe) 16(!!!) for XBone. Plenty of areas of a GPU (or CPU) are unused at any given moment, and GPGPU work can take advantage of that. Not to mention, GPGPU approaches can in fact handle 'traditional graphics' work more efficiently than standard shaders alone, e.g. a GPGPU program that culls geometry up front instead of running shaders on geometry that never ends up getting displayed. In a way, this is similar to PS3's SPUs (i.e. what gave PS3 the advantage in late games), except it's used the way programmers are already using GPGPU on PC. This is exactly what the PS4 offers as "headroom" for growth/optimization; otherwise it can do the same optimizations that XBone can (but with more CUs). POSSIBLY MAYBE XBone's ESRAM could be used at such high efficiency that its effective bandwidth surpasses PS4's GDDR5, but besides hugely limiting development approaches, the benefit would still be capped by XBone's fewer CUs. So the frame of reference here should be the R9 290X (but with unified GDDR5 memory shared with the CPU), not simply the HD7870 based on raw CU count. |
I do like your analysis here, and I am not an expert on recent GPU technology. My main reason to post this is that it was long speculated that the PS4 GPU was a customized HD7850, but now it is obvious that it is a customized HD7870, or something in the same ballpark (so it's technically better than expected). Of course customizations and console-specific improvements will unlock much better performance than an HD7870 GPU as well, but it's a good reference to begin with.
Another good point is that the HD7870 is the ABSOLUTE MINIMUM performance yardstick you can expect from a game. In other words, even with no optimization whatsoever, developers can expect at least 20-30% higher performance than on an equivalent PC, thanks to the fixed hardware and thinner API. This also means that 90% of XB1 exclusives would run equally well or better on PS4 without any additional optimization whatsoever.
I am loving this generation as it is much more transparent to us, damn it!
PlayStation 4 vs Xbox One Market Share Estimates
Regional Analysis (only MS and Sony Consoles)
Europe => XB1: 23-24% vs PS4: 76-77%
N. America => XB1: 49-52% vs PS4: 48-51%
Global => XB1: 32-34% vs PS4: 66-68%
Wow, this means the PS4 is even more powerful than previously thought.
It will be interesting to see what a 7870 equivalent is capable of without the restrictions of a bloated OS. It could mean great things for Steambox too, if Sony and its partners can show some truly mind-blowing stuff on the PS4.