
Forums - Gaming - PS4 And Xbox One GPU Performance Parameters Detailed, GDDR5 vs DRAM Benchmark Numbers

 

DICE was present at the recently concluded GDC, where the company's Graham Wihlidal spoke about how developers can process triangles and geometry within games more efficiently, thereby improving performance. The presentation mostly focuses on culling, the process of discarding objects such as triangles and pixels that will not be part of the final image. In short, a good culling process can improve performance, especially on the PS4 and Xbox One, whose CPUs are already quite dated compared to current tech. This is especially true if culling is carried out entirely on the CPU, although GPU-assisted approaches are available.
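The idea is easy to show with a toy backface test: a triangle whose face normal points away from the camera can never appear in the final image, so it can be skipped before rasterization. This is a minimal illustrative sketch, not code from the presentation:

```python
# Minimal CPU backface-culling sketch (illustrative only, not DICE's code).
# A triangle is "culled" when it cannot contribute to the final image,
# e.g. because its face normal points away from the viewer.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backfacing(v0, v1, v2, view_dir):
    """Cull a triangle whose face normal points away from the viewer."""
    edge1 = tuple(b - a for a, b in zip(v0, v1))
    edge2 = tuple(b - a for a, b in zip(v0, v2))
    normal = cross(edge1, edge2)
    return dot(normal, view_dir) >= 0.0  # facing away (or edge-on): cull it

# Counter-clockwise triangle facing a camera that looks down -Z:
front = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(is_backfacing(*front, (0, 0, -1)))  # -> False, keep this triangle
```

Reversing the winding order flips the normal, and the same triangle would then be culled.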

In addition to covering ways to improve culling on current-gen consoles, DICE also revealed some interesting new information about the PS4 and Xbox One, along with details that are already known. It's fairly well known at this point that the PS4 and Xbox One have similar CPU clock speeds (the Xbox One's is slightly higher). Furthermore, we also know that the PS4's GPU has 18 compute units compared to 12 on the Xbox One, which results in more ALU operations per cycle on Sony's machine. However, until now we didn't know the actual ALU throughput figures each GPU brings to bear when rendering triangles.

According to DICE, the Xbox One GPU can perform 768 ALU ops per cycle, while the PS4 manages 1017. The PS4 also takes the lead in the number of instructions that can be processed at a time: the Xbox One tops out at 368 instructions, the PS4 at 508. The slides then outline a number of culling methods that can be used to improve performance. However, a couple of things caught our eye.
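For context, a GCN compute unit contains 64 ALU lanes (four SIMD-16 units), so per-cycle ALU figures scale directly with the CU count. A quick back-of-the-envelope check, assuming 64 lanes per CU: the Xbox One's 768 lines up exactly with 12 × 64, while a straight 18 × 64 for the PS4 gives 1152, so the quoted 1017 presumably measures something other than raw lane count.

```python
# Back-of-the-envelope ALU math for AMD GCN, assuming 64 ALU lanes per compute unit.
ALUS_PER_CU = 64  # each GCN CU has 4 SIMD-16 units = 64 lanes

def alu_ops_per_cycle(compute_units):
    """Peak single-issue ALU operations per clock for a given CU count."""
    return compute_units * ALUS_PER_CU

print(alu_ops_per_cycle(12))  # Xbox One: 12 CUs -> 768, matching the figure above
print(alu_ops_per_cycle(18))  # PS4: 18 CUs -> 1152 peak lanes
```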

[Slide: cull and draw time comparison between PS4 and Xbox One]
The slide above clearly shows the advantage of the PS4's faster GDDR5 memory (with and without tessellation): both the cull time and the actual draw time are considerably faster on the PS4. It must be noted that the test case here involves rendering 443,429 triangles at a full HD resolution of 1080p.

Furthermore, it was also revealed that the Xbox One supports ExecuteIndirect, a command from DX12 that we have talked about before. It allows a single batch to issue many draws, which further improves performance. This also means that the Xbox One's API is already pretty close to DX12, and it's likely that the two now share similar libraries. Another interesting revelation is the Xbox One's custom command processor with micro-code support. Generally speaking, micro-code updates can patch processor behavior to fix bugs, improve stability and security, and potentially improve performance. Whether this is something Microsoft is already using or plans to use in the future is unknown at this point.
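The benefit is easiest to see in data terms: ExecuteIndirect consumes a GPU buffer of draw arguments, so a compute culling pass can compact that buffer down to only the surviving draws and the CPU still submits everything with a single call. A small Python stand-in (the field names merely mirror D3D12_DRAW_INDEXED_ARGUMENTS for familiarity; this is not D3D12 code):

```python
# Illustrative stand-in for the ExecuteIndirect pattern (not real D3D12 code):
# GPU culling writes a compacted list of draw arguments, and one CPU-side
# submission then issues however many draws survived.
from dataclasses import dataclass

@dataclass
class DrawIndexedArgs:
    # Mirrors the layout of D3D12_DRAW_INDEXED_ARGUMENTS for familiarity.
    index_count: int
    instance_count: int
    start_index: int
    base_vertex: int
    start_instance: int

def gpu_cull_and_compact(draws, visible):
    """Stand-in for a compute pass: keep only draws whose bounds passed culling."""
    return [d for d, v in zip(draws, visible) if v]

def execute_indirect(arg_buffer):
    """Stand-in for the single ExecuteIndirect call: one submission, N draws."""
    return sum(d.instance_count for d in arg_buffer)  # pretend to issue them

draws = [DrawIndexedArgs(36, 1, 0, 0, 0) for _ in range(4)]
survivors = gpu_cull_and_compact(draws, [True, False, True, True])
print(len(survivors))  # -> 3 draws remain, submitted in one batch
```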


Read more at http://gamingbolt.com/ps4-and-xbox-one-gpu-performance-parameters-detailed-gddr5-vs-dram-benchmark-numbers-revealed#X9joPhzLeEXIZuRM.99


 

The PS5 Exists. 



Nothing really unknown here; it's been well known for years that the PlayStation 4 beats the Xbox One in regards to graphics, and both fail miserably in comparison to the PC.

As for "culling", it is not a new feature; it has been used for decades, and the low-hanging fruit in regards to improving performance via more efficient culling was picked long ago, before the Xbox 360/PlayStation 3 era.
Even the original Xbox had such functionality.

If you want more efficient culling though... tile-based rendering is where it's at. You can spend more time "looking" at a tile to cut out unnecessary rendering; this is actually where the Xbox One would have an advantage, as the ESRAM is nicely suited to such a task.




www.youtube.com/@Pemalite

Pemalite said:
Nothing really unknown here; it's been well known for years that the PlayStation 4 beats the Xbox One in regards to graphics, and both fail miserably in comparison to the PC.

As for "culling", it is not a new feature; it has been used for decades, and the low-hanging fruit in regards to improving performance via more efficient culling was picked long ago, before the Xbox 360/PlayStation 3 era.
Even the original Xbox had such functionality.

If you want more efficient culling though... tile-based rendering is where it's at. You can spend more time "looking" at a tile to cut out unnecessary rendering; this is actually where the Xbox One would have an advantage, as the ESRAM is nicely suited to such a task.

Culling itself is not a new feature, but what Graham Wihlidal brings is new research into using GPU compute for culling!

I've always wondered what "tile based rendering" meant. Are we talking about PowerVR-style tile-based GPUs, immediate-mode rendering GPUs with large on-chip buffers, or tile-based light culling/shading?

In both the first and second case, classifying GPUs that way is starting to become outdated, much like the RISC vs CISC distinction, now that techniques such as those outlined in Graham's presentation, along with deferred texturing, let immediate-mode rendering GPUs mimic more and more of what mobile GPUs do ...



It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.
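Taken at face value, those timings work out to the following speedups (simple ratios, assuming the same workload on all three machines):

```python
# Speedup implied by the quoted tessellation timings, as plain ratios.
pc, ps4, xb1 = 0.70, 8.21, 11.3  # milliseconds

print(round(ps4 / pc, 1))  # -> 11.7, PS4 is ~11.7x slower than the PC part
print(round(xb1 / pc, 1))  # -> 16.1, Xbox One is ~16.1x slower
```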



curl-6 said:
It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.

Despite not owning either next-gen console, you're being awfully harsh on them, in spite of the fact that a lot of developers are giving them glowing praise, so they don't seem to think it's an issue yet ...

The base comparison is a lot more interesting, seeing as the Fury X's performance advantage shrinks if it doesn't do GPU compute culling ...



fatslob-:O said:
curl-6 said:
It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.

Despite not owning either next-gen console, you're being awfully harsh on them, in spite of the fact that a lot of developers are giving them glowing praise, so they don't seem to think it's an issue yet ...

The base comparison is a lot more interesting, seeing as the Fury X's performance advantage shrinks if it doesn't do GPU compute culling ...

It's not so much a matter of being harsh on the PS4/X1 as being impressed by what newer tech can do.

Hell, I'm still happy playing on Wii U, so I'm hardly a stickler for the latest in graphics.



Xbox one is stronger once you factor in the discrete GPU





curl-6 said:
It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.

How is this gen different from last gen, other than PlayStation being in first place? PCs could massively outperform consoles then too, but with single parts costing more than all the consoles combined. Or did you just not notice the power difference until now?



Chevinator123 said:

 

lmao best post xD Love that movie