elektranine said:

 Yes that was true back then as it was true this gen.

Wow. You seriously believe that? I knew the demographic on this forum leaned in a particular direction... But heck. You are the perfect example.

When the Xbox 360 launched in late 2005, it featured a GPU related to the Radeon X1900 released around the same time. Granted, it had lower clocks and its ROPs were cut in half... But it also featured a few improvements that would only later ship on PC in the Radeon HD 2900 series, such as unified shaders.

The Xbox 360 had a high-end GPU; there were only a couple of PC GPUs that could beat it at release (ignoring CrossFire), and even then the difference wasn't a significant 50% or more.

The PlayStation 3 had the RSX; the closest PC GPU was the GeForce 7800 GTX, albeit with half the ROPs and different clocks. Again, only a few PC GPUs could beat it, and even then not by much (ignoring dual-GPU cards like the GX2).

The Xbox One has a Radeon 7750-derived GPU; the PlayStation 4 uses a Radeon 7870-derived GPU, but with performance in line with the Radeon 7850.

The consoles launched in late 2013, while the Radeons they are based upon were released in early 2012, 20 months prior to the consoles launching.
However... The PlayStation 4 had a "mid-range" GPU because SINGLE GPUs on the PC were twice as fast, namely the Radeon 7970 GHz Edition.

But... It doesn't end there.
A month or two BEFORE the PlayStation 4 launched, AMD released the Radeon R9 290X, which increased the lead even more. For the first time ever in console history, a console launched while the PC had GPUs that were 3x faster, with theoretical performance of 12x or more if you count CrossFire. Ouch.

So am I wrong? Far from it.

elektranine said:


What APU was more powerful than the PS4's APU in 2013? Seriously I want links. You are wrong.


The PC doesn't need APUs. It has them, but it doesn't need them; if you want PS4 levels of GPU performance, you need to go discrete.
APUs don't walk on water and cure cancer... They aren't the answer to the universe either.

The PC has something better: discrete hardware.
At every feature size, say 28nm, there is an "optimal" number of transistors you can spend before things like yields, heat and power consumption start to get in the way. With an APU you are stressed for resources, as the CPU and GPU need to share that fixed budget.

The bonus of an APU, though, is cost. It's perfect for a cheap, cost-sensitive device, as you don't need to buy and package a heap of different chips, and you can get away with fewer PCB layers thanks to fewer traces.

But hey, if you think the PS4 could beat a high-end PC in 2013... I want what you are having.

elektranine said:


Its always about the money and console gamers tend to spend more on software then PC gamers.


Whatever. lol
http://venturebeat.com/2015/04/22/video-games-will-make-91-5b-this-year/
http://www.gamesindustry.biz/articles/2016-01-26-pc-trumps-mobile-console-in-booming-usd61bn-digital-games-market
https://opengamingalliance.org/press/details/global-game-software-market-forecasted-to-reach-100-billion-in-2019

They all say otherwise.

elektranine said:


 (Not really as devs will not want to limit themselves to 1-2 platforms. PS4 currently supports OpenGL/ES, DirectX 9-11.2 so most games will target OpenGL or DirectX10 to target all platforms.)

What? You make no sense.

The PlayStation 4 does not have the DirectX API. No PlayStation console has, and no PlayStation console ever will.
Its hardware is compatible with the DirectX 11+ feature set, but it doesn't have the software side of the equation, and the console's drivers don't expose DirectX calls anyway.

Game engines also tend to support a myriad of APIs.
For example, Unreal Engine 4 supports DirectX 10, 11 and 12, Vulkan, and OpenGL 3.3+ all at the same time and can switch between them, as sketched below. But on the PS4 it will never be able to use DirectX, for obvious reasons.
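To illustrate the idea, here's a minimal sketch of how an engine can abstract rendering behind a common interface and pick a backend per platform at startup. This is hypothetical code, not Unreal's actual RHI; the backend names and platform strings are assumptions for illustration.

```python
# Hypothetical per-platform graphics backend selection (not UE4's real RHI code).
from enum import Enum, auto

class Backend(Enum):
    D3D11 = auto()
    D3D12 = auto()
    VULKAN = auto()
    OPENGL = auto()
    GNM = auto()  # Sony's own low-level graphics API on PS4; DirectX is never an option there

def pick_backend(platform: str, prefer_low_level: bool = True) -> Backend:
    """Choose a rendering backend for the given platform."""
    if platform == "ps4":
        return Backend.GNM                      # no DirectX on PlayStation hardware
    if platform == "windows":
        return Backend.D3D12 if prefer_low_level else Backend.D3D11
    if platform == "linux":
        return Backend.VULKAN if prefer_low_level else Backend.OPENGL
    return Backend.OPENGL                       # conservative fallback

print(pick_backend("ps4"))      # Backend.GNM
print(pick_backend("windows"))  # Backend.D3D12
```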

If you can find proof that the PlayStation 4 uses the DirectX API... I will eat my hat. Otherwise... it's rubbish.

elektranine said:


 (Wrong again. This and other benchmarks prove that the PS4 has significantly less memory latency than the xbone. The xbone memory controller is slower, GDDR5 is the winner here there is no debate. The numbers don't lie. In many cases PS4 latency is only 50% of xbone.)


You obviously have no idea how memory latency is calculated. I did the math on this for everyone a few years ago, and it still holds; here it is again.

Latency for RAM is calculated in terms of the clock rate, so with that in mind, here are some examples:
DDR3-1600 memory runs an 800 MHz IO clock and has a typical CAS latency of 8, which works out to 8 / 800 MHz = 10 ns.
DDR2-800 memory runs a 400 MHz IO clock and has a typical CAS latency of 4, which is also 4 / 400 MHz = 10 ns.

With GDDR5, the data rate is 4x the IO clock instead of 2x. For example, 5 GHz (effective) GDDR5 runs a 1.25 GHz clock and would have a CAS latency of 15:
15 / 1.25 GHz = 12 ns
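A quick sketch of that arithmetic (the clock and CAS figures are the typical values used in the examples above, not measured numbers for either console):

```python
# CAS latency in nanoseconds = CAS cycles / command clock (in GHz).
def cas_latency_ns(cas_cycles: int, clock_ghz: float) -> float:
    return cas_cycles / clock_ghz

print(cas_latency_ns(8, 0.8))    # DDR3-1600, CAS 8        -> 10.0 ns
print(cas_latency_ns(4, 0.4))    # DDR2-800,  CAS 4        -> 10.0 ns
print(cas_latency_ns(15, 1.25))  # GDDR5 @ 5 Gbps, CAS 15  -> 12.0 ns
```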

Yes, the Xbox One has less bandwidth; that isn't up for debate, and why you bring it up when I reinforced that point beats me. But one thing is for sure: the Xbox One does have roughly a 20% advantage in RAM latency alone.
With that said, if you were to peruse the likes of AnandTech or Tom's Hardware or any other PC tech website and look at RAM benchmarks, you would see that latency usually has a very minimal effect on gaming, as Intel has a myriad of technologies to hide latency. I suggest you read up on the techniques Intel used in the Core 2 processors to hide latency.

elektranine said:

(Can you link to any siggraph papers that support your claims?)


(Can you prove that?)

(So a seperate memory module can reduce the latency of the main system RAM? I'm sure you have some interesting research in that area.)

(No its about better performance. The CPU based world is pretty much over. As a computer scientist I have noticed a massive shift away from CPU to a focus on exploiting GPU power. CPUs have peaked and they wont get much faster in the future, that's why Sony decided to focus on GPU tech.)

I have lumped these together because you are just picking at it, and it's all essentially the same point anyway.

Now, ESRAM is not a "separate memory module"; it's essentially a cache.

Now... hang on a moment whilst I educate you on caches.
The main reason caches exist is so that the CPU/GPU aren't forced to wander all the way down to RAM to fetch data. Why?
Because RAM has less bandwidth and higher latency than L1/L2/L3/L4-ESRAM/EDRAM.
When a CPU/GPU cannot find the data it wants in the L1, it goes to the slower, higher-latency L2, and so on.
If the data the CPU or GPU wants is in the ESRAM, it will access it there rather than in system RAM, because it's a faster, lower-latency pool. Otherwise, what would be the point in having ESRAM? If it were just as fast and had the same latency as regular RAM, they wouldn't bother including it. A rough sketch of that lookup order is below.
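A minimal sketch of that fallback behaviour, assuming made-up level names and latency figures purely for illustration (they are not measurements of any particular chip):

```python
# Walk the memory hierarchy from fastest to slowest until the data is found.
# Level names and latency figures are illustrative only.
HIERARCHY = [
    ("L1",        4),    # latency in cycles
    ("L2",       12),
    ("ESRAM",    40),
    ("MainRAM", 150),
]

def lookup(address: int, contents: dict):
    """Return (level_found, accumulated_latency), checking fastest levels first."""
    total = 0
    for level, latency in HIERARCHY:
        total += latency
        if address in contents.get(level, set()):
            return level, total
    raise KeyError("address not resident anywhere")

contents = {"ESRAM": {0x1000}, "MainRAM": {0x1000, 0x2000}}
print(lookup(0x1000, contents))  # found in ESRAM, never pays the trip to main RAM
print(lookup(0x2000, contents))  # misses every cache level, pays the full cost
```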

As you know, developers actually do have a surprising amount of control over what goes into the ESRAM/EDRAM and when, if they so desire. Does that mean everything the CPU/GPU wants will be in ESRAM? Well, there are no guarantees; not every game is the same.

Here is some information on how ESRAM/EDRAM can be used: http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/3


You are right that CPU performance has "peaked"; all the largest gains in performance have already been taken... but that's for Intel.

AMD CPUs, however, are pathetic in terms of performance.
All 8 Jaguar cores in the PlayStation 4 combined would be roughly equivalent to a dual-core Haswell Core i3 at about 3 GHz. Remember that AMD's fastest parts struggle against even Intel's low-end and mid-range parts, and AMD's lowest-end parts are even more laughable.

I would like all console manufacturers to take CPU performance seriously for once, but graphics is what sells; people like their shiny things.

fatslob-:O said:

Has it ? If we look in the case of tessellation they just lowered a rendering time of 19.3ms to 11.2ms! (That's almost a 2x difference in performance.)

The native tile based GPU appreach has the advantage of using no ALU and no bandwidth cost. The first advantage is practically irrelevant seeing as how ALU is cheap these days and the second advantage could be one but it's not since tile based GPUs are known to struggle with higher resolution meshes ... 

Need more data than that to see how much they actually managed to cull though.




www.youtube.com/@Pemalite