
PS4 And Xbox One GPU Performance Parameters Detailed, GDDR5 vs DRAM Benchmark Numbers

Everyone already knew the X1 was the weakest, this isn't 2013 xD



fatslob-:O said:
Pemalite said:
Nothing really unknown here; it's been well known for years that the Playstation 4 beats the Xbox One in regards to graphics, and both fail miserably in comparison to the PC.

As for "culling" it is not a new feature, it has been used for decades, the low-hanging fruits in regards to improving performance via more efficient culling was picked long ago. Before the Xbox 360/Playstation 3 era.
Even the original Xbox had such functionality.

If you want more efficient culling though... Tile-based rendering is where it's at. You can spend more time "looking" at a tile to cut out unnecessary rendering; this is actually where the Xbox One would have an advantage, as the ESRAM is nicely suited to such a task.

Culling itself is not a new feature, but what Graham Wihlidal brings is new research into using GPU compute for culling!

I've always wondered what "tile based rendering" meant. Are we talking about PowerVR-style tile-based GPUs with large on-chip buffers (as opposed to immediate-mode rendering GPUs), or tile-based light culling/shading?

In the first and second case, classifying GPUs that way is starting to become as outdated as the RISC vs CISC distinction, now that techniques like those outlined in Graham's presentation and deferred texturing exist for immediate-mode rendering GPUs to mimic more and more of what mobile GPUs do ...

The point was... that the amount of resources you spend to obtain better culling has diminishing returns; the low-hanging fruit in regards to culling was picked many years ago.

You are right, the tile-based rendering line is starting to blur, as GPUs which are designed around immediate-mode rendering can still do tile-based rendering.
But there are still some fundamental GPU architecture differences that give a native approach its advantage. (I suggest you read up on the Kyro/Kyro 2 GPUs at Anandtech.)
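To make the distinction concrete, here is a minimal CPU-side sketch of the binning step that defines tile-based rendering. It is purely illustrative; the tile size, resolution and data layout are assumptions, and real tile-based GPUs such as PowerVR's do this in hardware:

    #include <vector>
    #include <algorithm>
    #include <cstdint>

    // Illustrative sketch: bin screen-space triangles into fixed-size tiles.
    // Real tile-based GPUs do this in dedicated hardware with on-chip buffers.

    struct Tri { float x[3], y[3]; };            // screen-space vertex positions

    constexpr int kTileSize = 32;                // 32x32 pixel tiles (assumption)
    constexpr int kScreenW  = 1920;
    constexpr int kScreenH  = 1080;
    constexpr int kTilesX   = (kScreenW + kTileSize - 1) / kTileSize;
    constexpr int kTilesY   = (kScreenH + kTileSize - 1) / kTileSize;

    // For each tile, a list of indices of the triangles that touch it.
    std::vector<std::vector<uint32_t>> binTriangles(const std::vector<Tri>& tris)
    {
        std::vector<std::vector<uint32_t>> bins(kTilesX * kTilesY);
        for (uint32_t i = 0; i < tris.size(); ++i) {
            const Tri& t = tris[i];
            // Conservative bound: the triangle's screen-space bounding box.
            float minX = std::min({t.x[0], t.x[1], t.x[2]});
            float maxX = std::max({t.x[0], t.x[1], t.x[2]});
            float minY = std::min({t.y[0], t.y[1], t.y[2]});
            float maxY = std::max({t.y[0], t.y[1], t.y[2]});
            int tx0 = std::clamp(int(minX) / kTileSize, 0, kTilesX - 1);
            int tx1 = std::clamp(int(maxX) / kTileSize, 0, kTilesX - 1);
            int ty0 = std::clamp(int(minY) / kTileSize, 0, kTilesY - 1);
            int ty1 = std::clamp(int(maxY) / kTileSize, 0, kTilesY - 1);
            for (int ty = ty0; ty <= ty1; ++ty)
                for (int tx = tx0; tx <= tx1; ++tx)
                    bins[ty * kTilesX + tx].push_back(i);
        }
        return bins; // each tile can now be rasterised/shaded one at a time
    }

Once triangles are sorted into bins, a tile's colour/depth can stay entirely in fast on-chip memory while it is shaded, which is where the bandwidth saving of the native approach comes from.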

elektranine said:
curl-6 said:
It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.

How is this gen different from last gen, other than PlayStation being in first place? PCs could massively outperform consoles then too, with single parts costing more than all the consoles combined. Or did you just not notice the power difference until now?

Last generation the consoles launched with high-end GPUs, relative to the PC.
This generation... The GPUs in the consoles were only mid-range relative to the PC.

There has always been a power difference between console and PC, in the PC's favour, but I don't think the difference has *ever* been this catastrophically massive, which is extremely depressing, as developers tend to build games for the lowest common denominator. (Consoles.)

elektranine said:

I find it interesting that even with DirectX 12 the PS4 still significantly outperforms the Xbone on almost all fronts. I kept hearing how DDR3 was supposedly superior to GDDR5 in terms of latency, but that's clearly not true.

Direct X 12 isn't a magic bullet.
The Xbox actually has another API which allows games to be built closer to the metal, with even more performance than Direct X 12; it has been available since the console's launch, it's just more difficult to build stuff for.
The Playstation 4 offers something similar: it uses OpenGL (or a variation of it) as its high-level API, and a lower-level API for maximum performance, as it is closer to the metal.
The reason Direct X 12 was championed was mostly because of the PC; the PC had no low-level API like the consoles for software to be built close to the metal... Direct X 12 helps the PC get some console-like efficiency whilst still retaining a degree of abstraction.

Where the Xbox One wins is that developers who don't have the time/resources to build their games close to the metal will see some performance advantages. (Think indie/low budget and games relying on 3rd party engines like Unreal.)
It was never set out to change the Xbox landscape the way many people thought it would.

As for DDR3, it does have a latency edge over GDDR5; it's not stupidly massive... But it is there.
However, GPUs love bandwidth and they can hide latency really well... And graphics is what people see instantly when they see a game running; it is what helps sell games, and it is why console manufacturers tend to focus on GPU performance rather than the CPU.
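As a back-of-envelope illustration of what "hiding latency" means (the cycle counts below are made up for the example, not measured from either console):

    #include <cstdio>

    // Illustrative only: roughly how many wavefronts a GPU scheduler needs
    // resident to hide a memory access, assuming each wavefront has some
    // independent ALU work to do between memory requests.
    int wavefrontsToHideLatency(int memLatencyCycles, int aluCyclesPerMemOp)
    {
        // While one wavefront waits ~memLatencyCycles, the others each burn
        // aluCyclesPerMemOp cycles of useful work. Round up.
        return (memLatencyCycles + aluCyclesPerMemOp - 1) / aluCyclesPerMemOp;
    }

    int main()
    {
        // Made-up but plausible numbers: ~400 cycles to DRAM, ~40 cycles of
        // ALU work per memory request -> ~10 resident wavefronts hide it.
        std::printf("%d wavefronts\n", wavefrontsToHideLatency(400, 40));
    }

So as long as enough wavefronts are in flight, a few extra nanoseconds of DRAM latency simply disappears behind other work, while a bandwidth shortfall cannot be hidden the same way.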

The CPU will gain a small advantage from using DDR3, on top of its already higher clock speed; the result is that tasks which rely on the CPU will have a small advantage. Think RTS games with hundreds/thousands of units on screen; the PS4's performance will fold quicker than the Xbox One's in those scenarios.
Plus the ESRAM can also give the Xbox One a further latency edge when data needs to be retrieved from RAM.

In the end though, the faster CPU isn't going to be noticed in your graphics trailers or posters at Gamestop/EB Games like a better GPU would, which ultimately helps shift more consoles/games.




www.youtube.com/@Pemalite

Pemalite said:

The point was... that the amount of resources you spend to obtain better culling has diminishing returns; the low-hanging fruit in regards to culling was picked many years ago.

You are right, the tile-based rendering line is starting to blur, as GPUs which are designed around immediate-mode rendering can still do tile-based rendering.
But there are still some fundamental GPU architecture differences that give a native approach its advantage. (I suggest you read up on the Kyro/Kyro 2 GPUs at Anandtech.)

Has it? If we look at the case of tessellation, they just lowered a rendering time of 19.3ms to 11.2ms! (That's almost a 2x difference in performance.)

The native tile-based GPU approach has the advantage of incurring no ALU and no bandwidth cost. The first advantage is practically irrelevant seeing as ALU is cheap these days, and the second could be an advantage but isn't, since tile-based GPUs are known to struggle with higher-resolution meshes ...


fatslob-:O said:
elektranine said:

I find it interesting that even with DirectX 12 the PS4 still significantly outperforms the Xbone on almost all fronts. I kept hearing how DDR3 was supposedly superior to GDDR5 in terms of latency, but that's clearly not true.

Because Sony has the lower-level API, according to reports, and more GPU power, which will cement its lead over the Xbox One. Is that the answer you were looking for?

Well, I kept hearing from the media and other people about how DirectX 12 was supposed to be some revolution and that the Xbox One was just being held back by inferior APIs. I mean, even with these benchmark results, some people on other forums are still chanting "just wait for the first full fledged DX12 game". Some people just ignore reality. There are fans that will always support their favorite no matter what, but there were also people in the gaming media actually encouraging this line of thought.



Bandorr said:
Odd comparison. The Fury X came out in 2015, and Googling also seems to suggest it cost around 650, so it is more than both consoles combined.

If they wanted to include the Fury X, I would have liked them to include one other: an average 2013 video card, one that most people would have had when these consoles came out.

GTX 680 would have been a better comparison. It came out in 2012 but sold 10m units so it was very popular at the time and could be found relatively cheap as it was already a generation old. Still would have smoked both consoles, just not by as much.



Pemalite said:
elektranine said:

How is this gen different from last gen, other than PlayStation being in first place? PCs could massively outperform consoles then too, with single parts costing more than all the consoles combined. Or did you just not notice the power difference until now?

Last generation the consoles launched with high-end GPUs, relative to the PC. (Yes that was true back then as it was true this gen.)
This generation... The GPUs in the consoles were only mid-range relative to the PC. (What APU was more powerful than the PS4's APU in 2013? Seriously I want links. You are wrong.)

There has always been a power difference between console and PC, in the PC's favour, but I don't think the difference has *ever* been this catastrophically massive, which is extremely depressing, as developers tend to build games for the lowest common denominator. (Consoles.) (It's always about the money, and console gamers tend to spend more on software than PC gamers.)

elektranine said:

I find it interesting that even with DirectX 12 the PS4 still significantly outperforms the Xbone on almost all fronts. I kept hearing how DDR3 was supposedly superior to GDDR5 in terms of latency, but that's clearly not true.

Direct X 12 isn't a magic bullet.
The Xbox actually has another API which allows games to be built closer to the metal, with even more performance than Direct X 12; it has been available since the console's launch, it's just more difficult to build stuff for.
The Playstation 4 offers something similar: it uses OpenGL (or a variation of it) as its high-level API, and a lower-level API for maximum performance, as it is closer to the metal.
The reason Direct X 12 was championed was mostly because of the PC; the PC had no low-level API like the consoles for software to be built close to the metal... Direct X 12 helps the PC get some console-like efficiency whilst still retaining a degree of abstraction.

Where the Xbox One wins is that developers who don't have the time/resources to build their games close to the metal will see some performance advantages. (Think indie/low budget and games relying on 3rd party engines like Unreal.)
It was never set out to change the Xbox landscape the way many people thought it would. (Not really, as devs will not want to limit themselves to 1-2 platforms. PS4 currently supports OpenGL/ES and DirectX 9-11.2, so most games will target OpenGL or DirectX 10 to target all platforms.)

As for DDR3, it does have a latency edge over GDDR5; it's not stupidly massive... But it is there. (Wrong again. This and other benchmarks prove that the PS4 has significantly less memory latency than the Xbone. The Xbone memory controller is slower; GDDR5 is the winner here, there is no debate. The numbers don't lie. In many cases PS4 latency is only 50% of the Xbone's.)

However, GPUs love bandwidth and they can hide latency really well (Can you link to any SIGGRAPH papers that support your claims?)... And graphics is what people see instantly when they see a game running; it is what helps sell games, and it is why console manufacturers tend to focus on GPU performance rather than the CPU.

The CPU will gain a small advantage from using DDR3, on top of its already higher clock speed; the result is that tasks which rely on the CPU will have a small advantage. Think RTS games with hundreds/thousands of units on screen; the PS4's performance will fold quicker than the Xbox One's in those scenarios. (Can you prove that?)
Plus the ESRAM can also give the Xbox One a further latency edge when data needs to be retrieved from RAM. (So a separate memory module can reduce the latency of the main system RAM? I'm sure you have some interesting research in that area.)

In the end though, the faster CPU isn't going to be noticed in your graphics trailers or posters at Gamestop/EB Games like a better GPU would, which ultimately helps shift more consoles/games. (No, it's about better performance. The CPU-based world is pretty much over. As a computer scientist I have noticed a massive shift away from the CPU to a focus on exploiting GPU power. CPUs have peaked and they won't get much faster in the future; that's why Sony decided to focus on GPU tech.)

Perfect example of people taking liberties with the facts.



elektranine said:

 Yes that was true back then as it was true this gen.

Wow. You seriously believe that? I knew the demographic on this forum leaned in a particular direction... But heck. You are the perfect example.

When the Xbox 360 launched in late 2005, it featured a GPU related to the Radeon X1900 released around the same time; granted, it had lower clocks and its ROPs were cut in half... But it did feature a few improvements which would later appear in the 2900 series, such as unified shaders.

The Xbox 360 had a high-end GPU; there were only a couple of PC GPUs that would be able to beat it on release (ignoring Crossfire), and even then it wasn't a significant difference of 50% or more.

The Playstation 3 had the RSX; the closest PC GPU was the Geforce 7800 GTX, though the RSX had half the ROPs and different clocks. Again, only a few PC GPUs could beat it, and even then not by much (ignoring dual-GPU cards like the GX2).

The Xbox One has a Radeon 7750-derived GPU; the Playstation 4 uses a Radeon 7870-derived GPU, but with performance in line with the Radeon 7850.

The consoles launched in late 2013, while the Radeons they are based upon were released in early 2012, 20 months prior to the consoles launching.
However... The Playstation 4 had a "mid range" GPU because SINGLE GPUs on the PC, such as the Radeon 7970 GHz Edition, were twice as fast.

But... It doesn't end there.
A month or two BEFORE the Playstation 4 launched... AMD released the Radeon 290X, which increased the lead even more. For the first time ever in console history, a console launched and the PC had GPUs which were 3x faster, with theoretical performance of 12x or more if you count Crossfire. Ouch.

So am I wrong? Far from it.

elektranine said:


What APU was more powerful than the PS4's APU in 2013? Seriously I want links. You are wrong.


The PC doesn't need APUs; it has them, but it doesn't need them, and if you want PS4 levels of GPU performance, you need to go discrete.
APUs don't walk on water and cure cancer... They don't hold the answer to the universe either.

The PC has something better: discrete hardware.
At every feature size, say 28nm, you have an "optimal" number of transistors you can spend before things like yields and heat/power consumption start to get in the way; with an APU you are stretched for resources, as the CPU and GPU need to share a fixed budget.

The bonus of an APU, though, is cost; it is perfect for a cheap, cost-sensitive device, as you don't need to buy and package a heap of different chips, and you can have fewer PCB layers due to fewer traces.

But hey, if you think the PS4 could beat a high-end PC in 2013... I want what you are having.

elektranine said:


It's always about the money, and console gamers tend to spend more on software than PC gamers.


Whatever. lol
http://venturebeat.com/2015/04/22/video-games-will-make-91-5b-this-year/
http://www.gamesindustry.biz/articles/2016-01-26-pc-trumps-mobile-console-in-booming-usd61bn-digital-games-market
https://opengamingalliance.org/press/details/global-game-software-market-forecasted-to-reach-100-billion-in-2019

They all say otherwise.

elektranine said:


(Not really, as devs will not want to limit themselves to 1-2 platforms. PS4 currently supports OpenGL/ES and DirectX 9-11.2, so most games will target OpenGL or DirectX 10 to target all platforms.)

What? You make no sense.

The Playstation 4 does not have the Direct X API. No Playstation console has, and no Playstation console ever will.
Its hardware is compatible with the Direct X 11+ feature set, but it doesn't have the software side of the equation, and not even the drivers in the console would have Direct X calls anyway.

Game engines also tend to support a myriad of APIs.
For example... Unreal Engine 4 supports Direct X 10, 11 and 12, Vulkan, and OpenGL 3.3 and above, all at the same time, and can switch between them. But on the PS4 it will never be able to use Direct X, for obvious reasons.
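Roughly speaking, a multi-API engine just hides the graphics API behind a backend interface and picks one per platform. A deliberately simplified sketch follows (the class and function names are made up; this is not Unreal's actual code):

    #include <memory>
    #include <stdexcept>

    // Hypothetical renderer backend abstraction; names are invented.
    struct RenderBackend {
        virtual ~RenderBackend() = default;
        virtual const char* name() const = 0;
    };

    struct D3D12Backend  : RenderBackend { const char* name() const override { return "Direct3D 12"; } };
    struct VulkanBackend : RenderBackend { const char* name() const override { return "Vulkan"; } };
    struct GNMBackend    : RenderBackend { const char* name() const override { return "PS4 native (GNM-style)"; } };

    enum class Platform { WindowsPC, PS4 };

    std::unique_ptr<RenderBackend> createBackend(Platform p, bool preferLowLevel)
    {
        switch (p) {
        case Platform::WindowsPC:
            // On PC the engine can pick between several APIs at startup.
            return preferLowLevel ? std::unique_ptr<RenderBackend>(new D3D12Backend)
                                  : std::unique_ptr<RenderBackend>(new VulkanBackend);
        case Platform::PS4:
            // No Direct3D on a PlayStation; only the console's own APIs exist.
            return std::unique_ptr<RenderBackend>(new GNMBackend);
        }
        throw std::runtime_error("unknown platform");
    }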

If you can find proof that the Playstation 4 uses the Direct X API... I will eat my hat, otherwise... It's rubbish.

elektranine said:


(Wrong again. This and other benchmarks prove that the PS4 has significantly less memory latency than the Xbone. The Xbone memory controller is slower; GDDR5 is the winner here, there is no debate. The numbers don't lie. In many cases PS4 latency is only 50% of the Xbone's.)


You obviously have no idea how memory latency is calculated. I did the math on this for everyone a few years ago, which is 100% accurate; here it is again.

Latency for RAM is calculated in terms of clock rate, so with that in mind... here are some examples.
DDR3-1600 memory has an 800MHz IO clock and a typical CAS latency of 8, which means it has a latency of 10ns.
DDR2-800 memory has a 400MHz IO clock and a typical CAS latency of 4; this is also 10ns.

Now, with GDDR5 the data rate is 4x the IO clock instead of 2x. For example, 5GHz GDDR5 is 1.25GHz x4 and would have a CAS latency of 15:
15 / (1.25 GHz) = 12 ns
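If you want to check those figures yourself, it is a one-line formula; this tiny program just re-does the arithmetic above:

    #include <cstdio>

    // CAS latency in nanoseconds = CAS cycles / IO (command) clock in GHz.
    double casLatencyNs(double casCycles, double ioClockGHz)
    {
        return casCycles / ioClockGHz;
    }

    int main()
    {
        std::printf("DDR3-1600, CL8  : %.1f ns\n", casLatencyNs(8,  0.8));  // ~10 ns
        std::printf("DDR2-800,  CL4  : %.1f ns\n", casLatencyNs(4,  0.4));  // ~10 ns
        std::printf("GDDR5 5GHz, CL15: %.1f ns\n", casLatencyNs(15, 1.25)); // ~12 ns
    }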

Yes, the Xbox One has less bandwidth; that isn't up for debate, and why you bring it up when I reinforced that point is beyond me... But one thing is for sure: the Xbox One does have a roughly 20% advantage in RAM latency alone. With that said, if you were to peruse the likes of Anandtech or Tom's Hardware or any other PC tech website and look at RAM benchmarks, you will see that latency usually has a very minimal effect on gaming, as Intel has a myriad of technologies to hide it. I suggest you do some reading up on the tech Intel used in the Core 2 processors to hide latency.

elektranine said:

(Can you link to any SIGGRAPH papers that support your claims?)


(Can you prove that?)

(So a separate memory module can reduce the latency of the main system RAM? I'm sure you have some interesting research in that area.)

(No, it's about better performance. The CPU-based world is pretty much over. As a computer scientist I have noticed a massive shift away from the CPU to a focus on exploiting GPU power. CPUs have peaked and they won't get much faster in the future; that's why Sony decided to focus on GPU tech.)

I have lumped it all together because you are just picking at it and it's all essentially the same anyway.

Now, ESRAM is not a "separate memory module"; it's essentially a cache.

Now... Hang on a moment whilst I educate you on caches...
The main reason for the existence of caches is so that the CPU/GPU are not forced to wander all the way down to RAM to fetch data. Why?
Because RAM has less bandwidth and higher latency than L1/L2/L3/L4-ESRAM/EDRAM.
When a CPU/GPU cannot find the data it wants in the L1, it goes to the slower, higher-latency L2, and so on.
If the data the CPU or GPU wants is in the ESRAM, it will access it there rather than in system RAM, because it's a faster, lower-latency cache; otherwise what would be the point in having ESRAM? If it were just as fast and had the same latency as regular RAM, they wouldn't bother including it.

As you know, developers actually do have a surprising amount of control over what data goes into the ESRAM/EDRAM, and when, if they so desire. Does that mean everything the CPU/GPU wants will be in the ESRAM? Well, there are no guarantees; not every game is the same.

Here is some information on how ESRAM/EDRAM can be used: http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/3
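If it helps, here is a crude model of that fall-through; the hit rates and latencies are invented for illustration and are not the Xbox One's real figures:

    #include <cstdio>

    // Purely illustrative cache fall-through: each level is checked in turn
    // and pays that level's latency on a hit. Numbers are invented.
    struct Level { const char* name; double hitRate; double latencyNs; };

    double expectedAccessNs(const Level* levels, int n)
    {
        double expected = 0.0, pMiss = 1.0;
        for (int i = 0; i < n; ++i) {
            expected += pMiss * levels[i].hitRate * levels[i].latencyNs;
            pMiss    *= (1.0 - levels[i].hitRate);
        }
        return expected; // last level should have hitRate 1.0 (always hits)
    }

    int main()
    {
        const Level levels[] = {
            {"L1",    0.90,   1.0},
            {"L2",    0.70,   5.0},
            {"ESRAM", 0.50,  20.0},  // on-chip scratch used like a big cache
            {"DRAM",  1.00, 100.0},  // main memory, always "hits"
        };
        std::printf("expected access time: %.1f ns\n",
                    expectedAccessNs(levels, 4));
    }

The point of the model is simply that the more often a request is satisfied before it reaches DRAM, the less the DRAM's own latency matters.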


You are right that CPU performance has "peaked"; all the largest gains in performance have already been taken... But that's for Intel.

AMD CPUs, however, are pathetic in terms of performance.
All 8 Jaguar cores in the Playstation 4 would be roughly equivalent to a dual-core Haswell Core i3 at about 3GHz; remember that AMD's fastest struggle against even Intel's low-end and mid-range parts, and AMD's lowest-end parts are even more laughable.

I would like for all console manufacturers to take CPU performance seriously for once, but graphics is what sells, people like their shiny things.

fatslob-:O said:

Has it? If we look at the case of tessellation, they just lowered a rendering time of 19.3ms to 11.2ms! (That's almost a 2x difference in performance.)

The native tile-based GPU approach has the advantage of incurring no ALU and no bandwidth cost. The first advantage is practically irrelevant seeing as ALU is cheap these days, and the second could be an advantage but isn't, since tile-based GPUs are known to struggle with higher-resolution meshes ...

Need more data than that to see how much they actually managed to cull though.




www.youtube.com/@Pemalite

elektranine said:
curl-6 said:
It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.

How is this gen different from last gen, other than PlayStation being in first place? PCs could massively outperform consoles then too, with single parts costing more than all the consoles combined. Or did you just not notice the power difference until now?

Well, the Xbox 360 and PS3 were really powerful when they released, thanks to PC devs like Epic Games pushing companies like MS to make the console strong. Even PC magazines were impressed, and in the first year(s) there was hardly any talk about those consoles' performance until Crysis, but that was a beast of a machine to play.

With the PS4/Xbox One it feels different: you don't need a beast of a machine like you did back then for Crysis, and the tagline "1080p 60fps is for the gamers" was comical because that's what PC gamers considered the bare minimum. We are only a few years into this generation and there have been a bunch of games released on both consoles that don't hit that bare minimum, while playing those games at 1080p 60fps doesn't need a beast of a machine; you only need one if you want to play those games at 4K. Just ask somebody who got Rise of the Tomb Raider on PC.

Personally I don't care so much about graphics and power, but there are other limitations that consoles 'still' have that bother me much more...






Pemalite said:

Need more data than that to see how much they actually managed to cull though.

In ALL cases of their testing, DICE was able to get a speed-up on EVERYTHING! In the base case without tessellation they were able to get a speed-up of 20% while culling 78% of the triangles in what looks to be a modified scene from the Winter Palace in Dragon Age: Inquisition ...

Even more so for the Fury X, since it's heavily skewed in its compute/fixed-function ratio ...
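For anyone wondering what "culling in compute" actually does, here is a deliberately simplified CPU-side sketch of one of the tests involved (back-facing/zero-area rejection producing a compacted index buffer). It is illustrative only, not DICE's implementation; on the GPU this runs as a compute shader:

    #include <vector>
    #include <cstdint>

    // Simplified sketch of compute-style triangle culling, done on the CPU
    // for clarity. The real thing runs in a compute shader and writes out a
    // compacted index buffer before the draw is issued.

    struct Vec4 { float x, y, z, w; };           // clip-space position

    // Signed-area test in normalised device coordinates: rejects back-facing
    // and zero-area triangles before they ever reach the rasteriser.
    bool triangleVisible(const Vec4& a, const Vec4& b, const Vec4& c)
    {
        if (a.w <= 0.0f || b.w <= 0.0f || c.w <= 0.0f)
            return true;                          // crosses the near plane; keep it (conservative)
        float ax = a.x / a.w, ay = a.y / a.w;
        float bx = b.x / b.w, by = b.y / b.w;
        float cx = c.x / c.w, cy = c.y / c.w;
        float area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
        return area > 0.0f;                       // winding convention is an assumption
    }

    // Produces a compacted index buffer containing only surviving triangles.
    std::vector<uint32_t> cullTriangles(const std::vector<Vec4>& verts,
                                        const std::vector<uint32_t>& indices)
    {
        std::vector<uint32_t> out;
        out.reserve(indices.size());
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            const Vec4& a = verts[indices[i]];
            const Vec4& b = verts[indices[i + 1]];
            const Vec4& c = verts[indices[i + 2]];
            if (triangleVisible(a, b, c)) {
                out.push_back(indices[i]);
                out.push_back(indices[i + 1]);
                out.push_back(indices[i + 2]);
            }
        }
        return out;                               // feed this to the draw call
    }

The reported win comes from the fact that every triangle rejected here never costs vertex shading, rasterisation or overdraw downstream.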



curl-6 said:
fatslob-:O said:

Then what is it about?

Read the part of the sentence that you didn't bold.

I don't understand people sometimes lol :)