JRPGfan said:

So far we have 6 hatched chickens (game releases where you can compare head to head).
5 outta the 6 run better on the PS5.

You can blame the SDK and how well versions are optimised for each console... and maybe there's truth to that (we don't know).
So far, all we know is how these games stack up head to head.

*edit: Though we have had Dirt 5, and the devs say the SDK was fine and in about the same state for both.

I'm not blaming it on anything. I am sure we will get clarification from the "horse's mouth" at some point.

Assume nothing.

JRPGfan said:


1) AMD GPUs have 8-10 CUs per Shader Array because this is most efficient (i.e. there are drawbacks to more).
The Xbox has 14 CUs per Shader Array, thus losing a bit of performance on this count.

I have read the AMD RDNA whitepaper and can actually understand it. You sure you want to go down this rabbit hole?
Hint: there are pros and cons to each approach; neither approach is incorrect.
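
For anyone who wants the arithmetic behind that quoted claim, here is a quick back-of-the-envelope sketch. The CU and shader array counts are the commonly reported figures, so treat them as assumptions rather than official documentation:

```python
# Back-of-the-envelope only: CU and shader-array counts below are the
# commonly reported figures, treated here as assumptions, not official specs.
configs = {
    "Navi 10 (RX 5700 XT)": {"active_cus": 40, "shader_arrays": 4},
    "Xbox Series X":        {"active_cus": 52, "shader_arrays": 4},  # 56 physical CUs, 4 disabled
    "PlayStation 5":        {"active_cus": 36, "shader_arrays": 4},  # 40 physical CUs, 4 disabled
}

for name, cfg in configs.items():
    per_array = cfg["active_cus"] / cfg["shader_arrays"]
    print(f"{name}: ~{per_array:.0f} active CUs per shader array")

# A wider array means more CUs contending for that array's shared front-end
# and cache resources; fewer CUs per array at higher clocks keeps those
# resources less contended. Neither layout is "wrong" -- it's a balance.
```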


JRPGfan said:


2) Geometry engines are different between the two consoles.

And which do you think is better?
Note: I am asking what *you* think is better. I already know the answer.

JRPGfan said:

3) Higher clocks on the GPU are better if, say, a workload misfires or isn't used. This is like how Intel's single-thread performance favored gaming for a long time (where fewer, faster cores worked out better than more, slower cores).

Not really. Your understanding of graphics rendering pipelines is clearly limited on this front, it seems.

A CPU and a GPU are simply not comparable in this way.

CPUs specialize in serialized, highly complex, branching workloads that are extremely intricate.
GPUs, however, specialize in parallel, relatively simple workloads... there are just lots of them.

There is a reason why a GPU in 2020 is approaching 5120 "cores" where a CPU tops out at around 64 "cores", and why a CPU will push higher clock rates than a GPU... A CPU needs to extract as many instructions per clock as possible to optimize throughput, whereas a GPU can just blow out core counts to achieve the same.

CPUs also spend a massive amount of die area and transistors on hiding memory access latency to keep those powerful cores fed; GPUs generally don't, they just switch to another task, because in graphics there is always more work to be done.
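
If it helps, here is a toy Python sketch of that distinction. It is purely an analogy, nothing console-specific:

```python
import numpy as np

# Toy analogy only -- nothing console-specific. A graphics-style workload is
# the same simple operation applied independently to millions of elements,
# which is exactly what thousands of simple GPU "cores" are built for.
pixels = np.random.rand(1920 * 1080).astype(np.float32)
brightened = np.clip(pixels * 1.2, 0.0, 1.0)   # one simple op, huge data set

# A CPU-style workload is the opposite: a long, branchy chain where every
# step depends on the previous result. More cores don't help here -- only
# higher per-core speed (clocks, IPC, caches hiding memory latency) does.
state = 0.5
for _ in range(1_000_000):
    if state > 0.75:
        state = state * 0.5
    else:
        state = state * 1.1 + 0.01

print(brightened.mean(), state)
```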

JRPGfan said:


4) PS5 has cache scrubbers for the GPU that allow it to dump partial loads of data. This leads to a performance increase.

The Xbox Series X can cache snoop and do much the same, ejecting data from the caches on each chip.
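
Both mechanisms are ultimately about avoiding a full cache flush when memory contents change underneath the GPU. A deliberately simplified sketch of the idea, using a hypothetical ToyCache class rather than either vendor's actual mechanism:

```python
# Purely conceptual sketch -- this is not how either console's hardware is
# programmed; it only illustrates why invalidating a specific address range
# ("scrubbing") is cheaper than flushing an entire cache.
class ToyCache:
    def __init__(self):
        self.lines = {}                      # address -> cached data

    def fill(self, address, data):
        self.lines[address] = data

    def flush_all(self):
        self.lines.clear()                   # everything must be re-fetched

    def scrub_range(self, start, end):
        # Drop only the lines covering memory that just got overwritten;
        # everything else stays warm.
        self.lines = {a: d for a, d in self.lines.items()
                      if not (start <= a < end)}


cache = ToyCache()
for addr in range(0, 1024, 64):
    cache.fill(addr, f"data@{addr}")

cache.scrub_range(256, 512)                  # targeted invalidation
print(len(cache.lines), "lines still valid") # 12 of 16 lines survive
```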

JRPGfan said:


5) Sony did something to the shaders (this is just me repeating what GeordieMp wrote; he says this is apparent from patents).

And what is that "something", and what is the significance of that "something"?
Stay tuned. Because right now, no one knows.

RDNA is built on top of the GCN foundations that the Xbox One and Playstation 4 came with, and it's actually a highly modular design; I wouldn't be surprised if there was some customization in order to bolster backwards compatibility.

JRPGfan said:


6) The PS5 CPU appears to have a shared (unified) cache, while the same isn't true of the XSX.

That is because the Xbox Series X is using standard Zen CCXs.

We are not sure whether the Playstation 5 is doing the same yet; there is no point speculating without all the facts laid out on the table.
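
For what it's worth, here is a tiny sketch of why the split-versus-unified question matters at all. The same_l3 helper is hypothetical, and the 4 MB-per-CCX figure is simply what has been reported for the Series X SoC:

```python
# Illustrative topology only -- the helper is hypothetical, the 4 MB-per-CCX
# figure is what has been reported for the Series X SoC, and whether the PS5
# does the same is exactly what is still unconfirmed above.
split_l3 = {
    "CCX0": {"cores": [0, 1, 2, 3], "l3_mb": 4},
    "CCX1": {"cores": [4, 5, 6, 7], "l3_mb": 4},
}

def same_l3(core_a, core_b, topology):
    """True if two cores share an L3 slice in the given topology."""
    return any(core_a in ccx["cores"] and core_b in ccx["cores"]
               for ccx in topology.values())

# Data shared between cores 1 and 2 stays inside one CCX's L3; data shared
# between cores 1 and 5 has to cross the fabric between CCXs, which costs
# extra latency. A unified L3 would make every pair look like the first case.
print(same_l3(1, 2, split_l3))   # True  -> intra-CCX, cheap
print(same_l3(1, 5, split_l3))   # False -> cross-CCX, more expensive
```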

JRPGfan said:

7) MS sacrificed some console-centric performance efficiencies by adding more API layers to help with their Game Pass, no-generations thinking.


This has always been the case.

The original Xbox launched with DirectX 8 (or rather a derivative of it) and a low-level API; developers had the option to use whichever suited their development goals. DirectX allowed for easier development at the expense of performance.
But developers could also use the low-level API, which resulted in games like Half-Life 2, Morrowind and Doom looking like the duck's nuts at the time.

The Xbox 360 launched with DirectX 9 (or rather a derivative of it) and its own low-level API. At the start of the generation every developer and their pet cat was running with DirectX, but towards the end of the generation developers *had* to use the low-level API to make games like Halo 4 possible.

The Xbox One launched with DirectX and its own low-level APIs as well, both of which received refinement and updates throughout the generation. Developers initially started out leveraging DirectX; in fact many smaller-scale titles, especially indies, still use it.

But any game pushing graphics boundaries is using the low-level API.

Contrary to popular belief, the Xbox has more than just DirectX.


And Sony is no fucking different.

The Playstation 4, for example, has LibGNM as its low-level, high-performing API, but the console also has the higher-level LibGNMX wrapper sitting on top of it (roughly what DirectX 11 is to DirectX 12 in terms of convenience versus control).
Developers who wish to push the graphics envelope are obviously working with LibGNM directly, whereas earlier launch titles leaned on the higher-level wrapper... and many smaller-scale games, especially indies, rely on it and on middleware even today.

And the exact same thing exists on the Playstation 5.

So whilst you are criticizing the Xbox for doing something... you have been absolutely blind to the fact that Sony does the *exact* same thing. They are all just different tools to help different types of developers build and release games for a platform.
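
To make that trade-off concrete, here is a deliberately toy sketch of what "more API layers" buys and costs a developer. The class names are hypothetical, not DirectX, GNM/GNMX or any real SDK:

```python
# Hypothetical toy classes -- these are not DirectX, GNM/GNMX or any real SDK.
# They only illustrate the trade-off both platform holders offer: an explicit
# low-level path versus a convenience layer built on top of it.
class CommandList:
    """'Low-level' path: the developer records every state change and draw
    explicitly. More work, more control, nothing hidden."""
    def __init__(self):
        self.commands = []

    def set_pipeline(self, name):
        self.commands.append(("set_pipeline", name))

    def bind_buffer(self, slot, buffer_id):
        self.commands.append(("bind_buffer", slot, buffer_id))

    def draw(self, vertex_count):
        self.commands.append(("draw", vertex_count))


class EasyRenderer:
    """'High-level' path: one call does the setup, binding and draw for you.
    Faster to ship, but you pay for the extra layer and give up control."""
    def __init__(self):
        self.cmd = CommandList()

    def draw_mesh(self, mesh):
        self.cmd.set_pipeline("default")
        self.cmd.bind_buffer(0, mesh["vertex_buffer"])
        self.cmd.draw(mesh["vertex_count"])


mesh = {"vertex_buffer": 42, "vertex_count": 36}

EasyRenderer().draw_mesh(mesh)       # convenience layer: one line

cmd = CommandList()                  # explicit route: the developer owns it all
cmd.set_pipeline("default")
cmd.bind_buffer(0, mesh["vertex_buffer"])
cmd.draw(mesh["vertex_count"])
```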

Consider yourself educated on this topic... And hopefully no more FUD gets spread.

JRPGfan said:

This might not all just be SDK related.
If it's down to hardware differences, this could be the result for the entire gen.

While that's a bit early to say, it's not that far-fetched, going from the info we currently have.

Agreed. It might not be SDK related.

I would rather not speculate and spread false information until we have enough evidence in hand to substantiate something.



--::{PC Gaming Master Race}::--