Pemalite said:
globalisateur said:

Disadvantage actually. Pro has 64 ROPs, XBX only 32 ROPs.

I am sorry. But you are wrong.
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Console_GPUs

I tend to fact check all my claims before posting them, you should try it some time.

globalisateur said:

Even with the clock disadvantage and lower RAM bandwidth, the Pro has a slightly better fillrate than the XBX. The actual benchmarked fillrate on the Pro is a bit higher than the XBX's maximum theoretical fillrate.

Where the hell have you gotten your false information? Holy crap.

Give me the sources so I can hilariously tear it apart.

For pixel fillrate, you multiply the number of ROPs by the clock speed.
For texel fillrate, you multiply the number of TMUs by the clock speed.
For FLOPS, you multiply the number of shaders by the instructions per clock and then by the clock speed.

The Xbox One X has more ROPs, TMUs and shaders (I provided evidence), and a higher clock rate.

Essentially 1+1 = 2.
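The three formulas above are straightforward multiplications. Here is a minimal sketch with made-up numbers; the spec below is a hypothetical GPU for illustration only, not the Pro's or the X's real figures:

```python
# Illustrative only: 32 ROPs, 128 TMUs, 2048 shaders at 1.0 GHz are
# hypothetical numbers, not either console's actual specs.

def pixel_fillrate_gpixps(rops: int, clock_ghz: float) -> float:
    """Pixel fillrate in GPixels/s = ROPs * clock (GHz)."""
    return rops * clock_ghz

def texel_fillrate_gtexps(tmus: int, clock_ghz: float) -> float:
    """Texel fillrate in GTexels/s = TMUs * clock (GHz)."""
    return tmus * clock_ghz

def gflops(shaders: int, flops_per_clock: int, clock_ghz: float) -> float:
    """GFLOPS = shaders * FLOPs per clock (2 for FMA) * clock (GHz)."""
    return shaders * flops_per_clock * clock_ghz

print(pixel_fillrate_gpixps(32, 1.0))   # 32.0 GPixels/s
print(texel_fillrate_gtexps(128, 1.0))  # 128.0 GTexels/s
print(gflops(2048, 2, 1.0))             # 4096.0 GFLOPS, i.e. ~4.1 TFLOPS
```

The 2 FLOPs per clock comes from fused multiply-add counting as two operations, which is how GPU vendors usually quote peak FLOPS.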

globalisateur said:

It can be seen in a few games: in some particular scenes that are heavily fillrate-bound, the XBX struggles a bit compared to the Pro (particularly if the resolution is higher on XBX).

For instance, in scenes with fire in Shadow of Mordor on XBX (seen in one of the first DF framerate videos covering several XBX games). Also, in some rare scenes in COD WWII the Pro version can even output at a slightly higher resolution than the XBX, and in many other scenes the resolution is very similar in both versions.

We also know some other games run somewhat worse on XBX, which could well be explained by a ROP bottleneck: for instance Metal Gear Survive, Battlefront 2 or Redout.

You are clutching at straws at this point.

In every single metric, the Xbox One X has an advantage over the Playstation 4 Pro. - This isn't even up for debate, to say otherwise shows you to be hilariously uneducated on the topic.

I can't even believe I am having this conversation. I literally can't.

Errorist76 said:

It could also be explained by those Vega features in the Pro's GPU, like 16-bit floats, that the X1X is lacking. Especially in effect-heavy scenes this can make a world of difference, if utilised correctly.

Rapid Packed Math cannot be used for everything.
It's lower precision, which means it has an impact on the way a game runs/looks.

And considering that the Playstation 4, Xbox One, Xbox One X and around 70% of PC GPUs do not have Rapid Packed Math, its use is always going to be limited, especially for multiplats.
https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/4

AMD has been working with Futuremark to show the benefits of Rapid Packed Math. - Their takeaway? A 25% increase in performance in 3DMark Serra.
https://overclock3d.net/news/gpu_displays/amd_rx_vega_-_what_is_rapid_packed_math/1
That is far from the doubling that the theoretical floating-point figures would otherwise imply, don't you agree?
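The gap between a 2x theoretical rate and a 25% real gain follows from Amdahl's law: FP16 can only double the fraction of frame time that is actually eligible for it. A quick sketch (the fraction `f` here is an assumption for illustration, not a figure measured from the Serra demo):

```python
# Amdahl's law sketch: the fraction f of work that is FP16-eligible is
# an assumed illustrative value, not measured data from 3DMark Serra.

def overall_speedup(f: float, local_speedup: float = 2.0) -> float:
    """Overall speedup when a fraction f of the work runs
    local_speedup times faster and the rest is unchanged."""
    return 1.0 / ((1.0 - f) + f / local_speedup)

print(overall_speedup(0.4))  # ~1.25: a 25% gain needs ~40% of work in FP16
print(overall_speedup(1.0))  # 2.0: doubling only if *everything* ran FP16
```

In other words, a 25% overall gain is exactly what you would expect if roughly 40% of the workload could be packed into FP16.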

As for image quality, here is a good comparison between FP16 and FP32, aka half precision vs single precision, aka Rapid Packed Math vs not... which just reinforces the fact that it cannot be used for everything.
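The precision gap is easy to demonstrate numerically. A small sketch using NumPy's half- and single-precision types:

```python
import numpy as np

# FP16 (what Rapid Packed Math operates on) has a far smaller range and
# coarser precision than FP32, which is why it can't be used everywhere.
fp16 = np.finfo(np.float16)
fp32 = np.finfo(np.float32)

print(fp16.max)  # 65504.0 -- anything larger overflows to infinity
print(fp32.max)  # ~3.4e38

# Around 2048, consecutive FP16 values are 2 apart, so adding 1 is lost:
print(np.float16(2048) + np.float16(1))  # 2048.0 in FP16
print(np.float32(2048) + np.float32(1))  # 2049.0 in FP32
```

That lost increment is the kind of error that shows up as banding or shimmer when FP16 is applied to values that need the extra precision, e.g. positions rather than colour data.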

You're really quite the know-it-all here. I never said it can be used for everything. I specifically said it can be used to increase FX performance, didn't I?

A week ago you were still denying the Pro even uses those features.