fatslob-:O said:
Pemalite said:
Nothing really unknown here; it's been well known for years that the PlayStation 4 beats the Xbox One in regards to graphics, and both fail miserably in comparison to the PC.

As for "culling" it is not a new feature, it has been used for decades, the low-hanging fruits in regards to improving performance via more efficient culling was picked long ago. Before the Xbox 360/Playstation 3 era.
Even the original Xbox had such functionality.

If you want more efficient culling though... Tile-based rendering is where it's at. You can spend more time "looking" at a tile to cut out unnecessary rendering, and this is actually where the Xbox One would have an advantage; the ESRAM is nicely suited to such a task.

Culling itself is not a new feature, but what Graham Wihlidal brings is new research into using GPU compute for culling!

I've always wondered what "tile based rendering" meant. Are we talking about PowerVR-style tile-based GPUs, immediate-mode rendering GPUs with large on-chip buffers, or tile-based light culling/shading?

In the first and second cases, classifying GPUs that way is starting to become outdated, much like the RISC vs CISC days, now that techniques such as those outlined in Graham's presentation, and deferred texturing, exist for immediate-mode rendering GPUs to mimic more and more of what mobile GPUs do...

The point was... that the amount of resources you spend in order to obtain better culling has diminishing returns; the low-hanging fruit in regards to culling was picked many years ago.
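To put the GPU-compute culling point above into concrete terms: the core of it is a trivially parallel per-instance visibility test that a compute shader can run over every object before anything is drawn. Here's a minimal CPU-side sketch of that test in C++ (made-up types and names, purely to show the shape of the work that gets moved onto the GPU, not anything from Graham's actual code):

```cpp
#include <cstddef>
#include <vector>

// Plane stored as (normal.xyz, d) with the convention dot(n, p) + d >= 0 => inside.
struct Plane  { float nx, ny, nz, d; };
struct Sphere { float cx, cy, cz, radius; };   // per-instance bounding sphere

// The per-instance test a culling compute shader evaluates: one thread per sphere.
bool SphereInsideFrustum(const Sphere& s, const Plane (&frustum)[6])
{
    for (const Plane& p : frustum)
    {
        const float dist = p.nx * s.cx + p.ny * s.cy + p.nz * s.cz + p.d;
        if (dist < -s.radius)
            return false;                      // fully outside one plane -> cull it
    }
    return true;                               // visible (or intersecting the frustum)
}

// CPU stand-in for the GPU dispatch: on the GPU each iteration is an independent thread,
// and the survivors are appended to a buffer instead of a std::vector.
std::vector<std::size_t> CullInstances(const std::vector<Sphere>& spheres,
                                       const Plane (&frustum)[6])
{
    std::vector<std::size_t> visible;
    for (std::size_t i = 0; i < spheres.size(); ++i)
        if (SphereInsideFrustum(spheres[i], frustum))
            visible.push_back(i);
    return visible;
}
```

On the GPU, the surviving indices would typically feed an indirect-draw argument buffer (multi-draw-indirect / ExecuteIndirect style), so the CPU never touches per-object visibility at all.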

You are right, the tile-based rendering line is starting to be blurred, as GPUs which are designed around immediate-mode rendering can still do tile-based rendering.
But there are still some fundamental GPU architecture differences that give a native approach its advantage. (I suggest you read up on the Kyro/Kyro 2 GPUs at AnandTech.)
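For the tiled light culling/shading flavour mentioned above (the trick immediate-mode GPUs use to mimic tilers), the idea boils down to: chop the screen into small tiles and build a per-tile list of the lights that could possibly touch it, so the shading pass only ever reads those. A rough, purely illustrative C++ sketch, using screen-space circles instead of proper per-tile sub-frustums:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr int kTileSize = 16;                       // pixels per tile side (a typical choice)

struct LightScreenBounds { float x, y, radius; };   // projected light centre + radius, in pixels

// Conservative circle-vs-rectangle overlap test.
bool LightTouchesTile(const LightScreenBounds& l, float x0, float y0, float x1, float y1)
{
    const float cx = std::fmax(x0, std::fmin(l.x, x1));   // closest point on the tile to the centre
    const float cy = std::fmax(y0, std::fmin(l.y, y1));
    const float dx = l.x - cx, dy = l.y - cy;
    return dx * dx + dy * dy <= l.radius * l.radius;
}

// Returns, for every tile, the indices of lights that may affect it.
// On a GPU this is one compute work-group per tile, with the list built in on-chip shared memory.
std::vector<std::vector<int>> BuildTileLightLists(int width, int height,
                                                  const std::vector<LightScreenBounds>& lights)
{
    const int tilesX = (width  + kTileSize - 1) / kTileSize;
    const int tilesY = (height + kTileSize - 1) / kTileSize;
    std::vector<std::vector<int>> lists(static_cast<std::size_t>(tilesX) * tilesY);

    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx)
        {
            const float x0 = float(tx * kTileSize), y0 = float(ty * kTileSize);
            for (int i = 0; i < int(lights.size()); ++i)
                if (LightTouchesTile(lights[i], x0, y0, x0 + kTileSize, y0 + kTileSize))
                    lists[std::size_t(ty) * tilesX + tx].push_back(i);
        }
    return lists;
}
```

A tile's working set is tiny, which is why small, fast on-chip memory (a tiler's tile buffer, or something like the ESRAM) pairs naturally with this style of rendering.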

elektranine said:
curl-6 said:
It's really quite sobering just how massively modern PC parts outperform current gen consoles.

Those time comparisons in particular are just brutal, where you have, in the tessellation chart for example, 0.70ms on PC vs 8.21ms on PS4 and 11.3ms on Xbox One.

How is this gen different from last gen, other than PlayStation being in first place? PCs could massively outperform consoles back then too, with single parts costing more than all the consoles combined. Or did you just not notice the power difference until now?

Last generation the consoles launched with high-end GPUs, relative to the PC.
This generation... The GPUs in the consoles were only mid-range relative to the PC.

There has always been a power difference between console and PC, in the PC's favour, but I don't think the difference has *ever* been this catastrophically massive, which is extremely depressing as developers tend to build games for the lowest common denominator. (Consoles.)

elektranine said:

I find it interesting that even with DirectX 12 the PS4 still significantly outperforms the Xbox One on almost all fronts. I kept hearing how DDR3 was supposedly superior to GDDR5 in terms of latency, but that's clearly not true.

DirectX 12 isn't a magic bullet.
The Xbox actually has another API which allows games to be built even closer to the metal, with even more performance than DirectX 12, and it has been available since the console's launch; it's just more difficult to build stuff for.
The PlayStation 4 offers something similar: it uses OpenGL (or a variation of it) as its high-level API, and a lower-level API for maximum performance, as that is closer to the metal.
The reason DirectX 12 was championed was mostly because of the PC; the PC had no console-style low-level API for software to be built close to the metal... DirectX 12 helps the PC get some console-like efficiency whilst still retaining a degree of abstraction.
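To show what "a degree of abstraction, but closer to the metal" actually looks like: under DirectX 12 the application itself creates the queue, records command lists and handles CPU/GPU synchronisation, work the DirectX 11 driver used to do behind your back. A stripped-down C++ sketch of that submit path (headless, no swapchain, all error handling omitted):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Explicit device creation; there is no hidden "immediate context" like in D3D11.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The app owns the queue, the command allocator and the command list.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator));

    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));

    // ... record draw/dispatch/barrier calls here; nothing executes yet ...
    cmdList->Close();

    // Submission and CPU/GPU synchronisation are also the app's job, via a fence.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);

    HANDLE done = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
    return 0;
}
```

The trade-off is exactly the one described above: more control and less driver overhead, but every one of those steps is now the engine's responsibility.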

The Xbox One does win from it, in that developers who don't have the time/resources to build their games close to the metal will see some performance advantages. (Think indie/low-budget titles and games relying on 3rd-party engines like Unreal.)
It was never set to change the Xbox landscape in the way many people thought it would.

As for DDR3, it does have a latency edge over GDDR5; it's not stupidly massive... but it is there.
However, GPUs love bandwidth and can hide latency really well... And graphics is what people see instantly when they see a game running; it is what helps sell games, and it is why console manufacturers tend to focus on GPU performance rather than the CPU.
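As a back-of-envelope illustration of why the GPU shrugs the latency off (all numbers below are made-up round figures, not measurements of either console): the GPU only needs enough independent wavefronts in flight to cover the stall, and it has thousands of threads to pick from.

```cpp
#include <cstdio>

// Back-of-envelope: how much independent work does a GPU need in flight to hide
// a given memory latency? All figures are illustrative round numbers.
int main()
{
    const double latency_cycles[] = { 300.0, 400.0 };   // pretend "DDR3-ish" vs "GDDR5-ish"
    const double issue_cycles_per_wave = 4.0;            // assumed cycles of useful ALU work each
                                                          // resident wavefront can issue while another waits

    for (double latency : latency_cycles)
    {
        // This many resident wavefronts keep the SIMDs busy for the whole stall.
        const double waves_needed = latency / issue_cycles_per_wave;
        std::printf("latency %6.0f cycles -> ~%3.0f wavefronts in flight to hide it\n",
                    latency, waves_needed);
    }
    // The point: GPUs routinely keep tens of wavefronts resident per compute unit, so an extra
    // hundred cycles of memory latency barely registers; a single CPU thread has no such pool
    // of independent work, which is why latency matters far more to the CPU.
    return 0;
}
```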

The CPU will gain a small advantage from using DDR3, on top of its already higher clock speed; the result is that tasks which rely on the CPU will have a small advantage. Think RTS games with hundreds/thousands of units on screen: the PS4's performance will fold quicker than the Xbox One's in those scenarios.
Plus the ESRAM can also give the Xbox One a further latency edge when data needs to be retrieved from RAM.

In the end though, the faster CPU isn't going to be noticed in your graphics trailers or on posters at GameStop/EB Games the way a better GPU would be, and that is ultimately what helps shift more consoles/games.




www.youtube.com/@Pemalite