
Penello specifies why X1vsPS4 performance difference is overstated

People shouldn't be arguing over specs. The difference in raw performance between the two systems is easily identified.

If you really want to make the systems "seem" on par, argue instead that a 30-50% performance advantage isn't really as much as people make it out to be.



Intel Core i7 3770K [3.5GHz]|MSI Big Bang Z77 Mpower|Corsair Vengeance DDR3-1866 2 x 4GB|MSI GeForce GTX 560 ti Twin Frozr 2|OCZ Vertex 4 128GB|Corsair HX750|Cooler Master CM 690II Advanced|

Jadedx said:
We will see once the console comes out, but I bet MS will be right and GAF/internet pseudo-experts will get owned.


Well, you do think the 360 will outsell the PS3 lifetime, so there's no bias with you, is there?



Pemalite said:
Captain_Tom said:

Exactly. He can claim that 50% more cores doesn't net you 50% more power, but he is ignoring the fact that the PS4 also has 50% more ROPs/TMUs/etc. As a matter of fact, the 7970 has double the cores of the 7850, and guess what? It performs twice as well!

Then add in the fact that the PS4 has WAY more bandwidth and hUMA, and it is easy to see how it will perform twice as well, like some developers have directly suggested. Get your heads out of the clouds, people...


It has almost 50% more of everything, except for a few things like the geometry engines, which are going to be a big part of next generation; everything will have depth, so hopefully no more flat, blurry ground.

The bandwidth advantage of the PS4 isn't as big as you think either; the Xbox One has lower bandwidth requirements to begin with due to the slower GPU, and the eSRAM will give it that little extra boost.
Ideally, Microsoft should have gone with GDDR5, but it probably decided against that due to immediate costs (and possibly CPU performance, given the roughly 20% added latency?). GDDR5 doesn't enjoy the economies of scale that DDR3 does, and it also requires a more complex memory controller, which costs transistors; the transistor budget that could have been spent on the memory controller and GPU was pretty much all thrown at the eSRAM and then some.

On the flip side, once low-end GPUs and IGPs start using GDDR5, it's going to be good news for Sony: it's going to get cheaper. High-end cards don't really sell much in terms of volume, so their shift to GDDR6 won't impact prices much.
Whereas DDR3 is going to get more costly from here on out; DDR3 prices have already increased over the past year, and that cost should jump for Microsoft as the PC market shifts its focus to DDR4 production.

Exactly what I was thinking. People act like DDR3 was cheaper, but in the long run it could easily end up costing more, especially since they tried to make up for it with costly eSRAM...



JEMC said:
prayformojo said:
JEMC said:
prayformojo said:
Here is what I believe. The PS2 was a nightmare to code for and the UI was basically nada. The Xbox, on the other hand, was easy to code for and had a dashboard, friends list, voice messaging, XBL, HDD, customized soundtracks during games, etc. The PS3 was a nightmare to code for and the UI was very poor due to Sony not giving the machine enough OS RAM. The Xbox 360, well, you get where I'm going.

I think the Xbone will have a better UI, and it will run smoother because MS has been creating software since before most of you were even alive. Sony's strength has never been software, and I doubt it ever will be. That doesn't mean the PS4 won't have the best games etc., but if you're looking for quick and efficient apps, UI performance and integration across all things (smartphones, internet, etc.), you will want to own an Xbone.

Too bad most of us want a console to play games, not to look at the UI and how well it runs.


So do I, but nothing, and I mean NOTHING, is worse than a sluggish or clunky UI and OS, imo.

But you/we must deal with the UI for what, 20-30 minutes when we buy and configure the console, and once it's all set, less than 2 minutes if we only want to play? That's nothing compared to the hours we'll spend gaming.

I understand that for you it's important, but for a lot of us it's not a big deal. Personally I don't care, and I have a WiiU with a slow UI...


But the UI and OS extend into everything we do. This isn't the PS1 days where you pop in a game, see a PS logo and that's it. Now we rely on multitasking and doing more than one thing at a time. Let's say you're in a game, and a friend comes online, and you want to see what he/she is doing? On the 360, you push the guide button and BAM, everything is right there. On the PS3, when you try to do this, it takes like 5 seconds just to load the damn thing. Want to buy a game on the 360? You just scroll up and to the right to games. On the PS3, you had to load up an app, and then wait 15 seconds just to get into the store. Once you were IN the store? Then you have to wait while things load. It's laggy and annoying. What about when you get an achievement and you want to view what it is? On the 360, it's seamless, quick and easy. On the PS3, again, you have to wait, and wait, then sync... let the thing load, THEN you finally get to see the trophy you unlocked. It's just so unbearable.

Now, the PS3 definitely has the better library and, imo, won this gen in that department, but for me? The UI and OS were just so damn bad that they ruined half the experience. If these were the old days where our consoles just played games, none of this would matter, but times have changed and I expect more.



Captain_Tom said:
[snip]
Exactly what I was thinking. People act like DDR3 was cheaper, but in the long run it could easily end up costing more, especially since they tried to make up for it with costly eSRAM...

There's a real possibility MS will be able to use DDR4 later in the X1's cycle, if their controller is designed to support it, and it's very likely it will. There's not much difference between the two standards, and the DDR3 they picked is precisely where DDR4 begins (2133 MHz).

As for the old debate about latency: the X360 never had a problem using GDDR3 for its CPU. The whole thing is some kind of urban legend; GDDR latencies are really good. In fact, GDDR memory is just a better memory chip overall (that comes with a price); it isn't behind in latencies, it's just way ahead in base clock and effective bandwidth (because of the x4 multiplier).

In PC CPU benchmarks, bandwidth always wins over latency: if you increase the speed of your memory, you increase bandwidth and gain fps, even if you have to loosen the timings. GTA memory benchmarks are a good example.


Even if you go from 1600 MHz at 7-7-7-19 to 1866 MHz at slower 9-9-9-24 timings, you gain fps on the CPU side of things. So now let's imagine going from 2133 MHz to 5500 MHz with equal or even worse timings... it's still gonna be faster. Modern CPUs are also highly parallel in multi-core configurations, so timings are becoming less and less important; like on the GPU side, bandwidth is everything.
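As a rough sketch of the arithmetic behind that claim (assuming the standard CAS-cycles-divided-by-IO-clock conversion; the numbers come from the timings quoted above, not from benchmarks):

```python
# Back-of-the-envelope: absolute CAS latency vs. peak bandwidth for the two
# DDR3 configurations above. latency_ns = CAS cycles / IO clock (GHz);
# the IO clock is half the effective DDR3 transfer rate.

def cas_latency_ns(cas_cycles: float, io_clock_ghz: float) -> float:
    return cas_cycles / io_clock_ghz  # cycles / GHz = nanoseconds

def peak_bw_gbs(effective_mtps: float, bus_bits: int) -> float:
    return effective_mtps * (bus_bits / 8) / 1000  # MT/s * bytes -> GB/s

print(cas_latency_ns(7, 0.800))  # DDR3-1600 CL7 -> 8.75 ns
print(cas_latency_ns(9, 0.933))  # DDR3-1866 CL9 -> ~9.65 ns, barely worse
print(peak_bw_gbs(1866, 64) / peak_bw_gbs(1600, 64))  # ~1.17x the bandwidth
```

So absolute latency barely moves while bandwidth rises about 17%, which is the trade-off being described.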



petalpusher said:

There's a real possibility MS will be able to use DDR4 later in the X1's cycle, if their controller is designed to support it, and it's very likely it will. There's not much difference between the two standards, and the DDR3 they picked is precisely where DDR4 begins (2133 MHz).


The Xbox One wouldn't have a DDR3+DDR4 memory controller; that costs transistors, the transistor budget for the Xbox One's APU is already massive, and a DDR4 memory controller would also need more transistors than a DDR3 one.
It's why AMD ditched the DDR2+DDR3 memory controller after the Phenom II: why waste transistors that could be used for more cache or another core?

petalpusher said:

As for the old debate about latency: the X360 never had a problem using GDDR3 for its CPU. The whole thing is some kind of urban legend; GDDR latencies are really good. In fact, GDDR memory is just a better memory chip overall (that comes with a price); it isn't behind in latencies, it's just way ahead in base clock and effective bandwidth (because of the x4 multiplier).


GDDR latency in terms of latency per clock has never been good; GPUs simply tend not to be latency sensitive. Having low-latency memory makes a big difference when a CPU stalls.
This is also why enthusiasts have always pushed for lower-latency memory on AMD platforms: AMD's predictor is incredibly poor compared to Intel's, so when it has a cache miss and thus has to travel all the way down to system RAM, it takes a performance penalty.
Granted, newer (higher-end) CPUs from AMD won't see much difference, because, well, they have large and fast caches. Kabini/Jaguar is a completely different beast, however; in plain terms, they're simple "cheap crap".
Intel has done a lot of work on that front since the Core 2 days, so using them as an example is always bad form.

Also, keep in mind there won't be as much buffering going on in the consoles as there is on the PC at various stages, for multiple reasons, which also hides latency.

And really, despite timings increasing over the years for system RAM (which is to keep up with the clock rate), latency hasn't actually changed.
Grab some DDR3-1600 memory: that's an 800 MHz IO clock with a typical CAS latency of 8, which means a latency of 10 ns.
Grab some DDR2-800 memory: that's a 400 MHz IO clock with a typical CAS latency of 4, which is also 10 ns.

Now with GDDR5 the data rates are 4x the IO clock instead of 2x, i.e. 5 GHz GDDR5 is 1.25 GHz x4 and would have a CAS latency of 15.
15 / (1.25 GHz) = 12 ns
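The same arithmetic as a quick code sketch, assuming the CAS values quoted above are typical:

```python
# latency_ns = CAS cycles / IO clock (GHz). The IO clock is the effective
# transfer rate divided by 2 for DDR2/DDR3 and by 4 for GDDR5.

def cas_latency_ns(cas_cycles: float, io_clock_ghz: float) -> float:
    return cas_cycles / io_clock_ghz  # cycles / GHz = nanoseconds

print(cas_latency_ns(8, 0.80))   # DDR3-1600, CL8      -> 10.0 ns
print(cas_latency_ns(4, 0.40))   # DDR2-800,  CL4      -> 10.0 ns
print(cas_latency_ns(15, 1.25))  # GDDR5 @ 5 GHz, CL15 -> 12.0 ns
```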

If you were to benchmark 10 ns memory against 12 ns memory, you would notice a difference. Synthetics usually show larger differences than what really happens in real-world applications; however, there is still a difference.


So essentially, if the CPU is being utilised fully, which we should assume in a fixed-hardware environment, the performance delta will be bigger on the consoles than what you see on the PC, where large portions of a CPU might actually go without being fully utilised.
For GPUs it's no big deal.
In the end, we already knew the Xbox One has a faster CPU than the Playstation 4 anyway; the difference for next generation, however, is that it's simply not going to matter, as the Playstation 4's GPU has extra resources to offload some of the more parallel tasks the CPU might otherwise handle.



--::{PC Gaming Master Race}::--

About DDR4, we will see; it's a different case from DDR2/DDR3, the differences between the two standards are not so massive, and AMD has already made a dual-standard controller in one of their GPUs: the HD 7770 can accept either DDR3 (for very cheap entry-level cards) or GDDR5 on the same controller. The X1 memory controller is indeed different from the one on the 7770, 128-bit vs 256-bit (to negate the bandwidth difference with DDR3).

On latency, you are right, but the only problem is that you are assuming the same speed; PS4 RAM (and for the CPU too, since it's the same pool) is more than twice as fast. Base clocks are 1066 MHz (x2) vs 1375 MHz (x4), so you end up with 2133 vs 5500 effective; even a 20% loss in latency will be easily recovered for latency-intensive operations.
It's really, again, a non-issue in my opinion; I would not be surprised at all if, between the two consoles, the one with GDDR5 is the fastest on CPU code as well.

The latest GDDR5 chips from Samsung/Hynix seem to be at 7 ns latency, so those chips are really good at it. GDDR5 development keeps going, while DDR3 is waiting for DDR4.
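To put rough numbers on the base-clock comparison above (a sketch; the 256-bit memory buses are the commonly reported widths for both consoles, not something stated in this post):

```python
# Peak bandwidth = base clock * data-rate multiplier * bus width in bytes.
# Assumes the commonly reported 256-bit main-memory buses on both consoles.

def peak_bw_gbs(base_clock_mhz: float, multiplier: int, bus_bits: int) -> float:
    effective_mtps = base_clock_mhz * multiplier   # effective MT/s
    return effective_mtps * (bus_bits / 8) / 1000  # MT/s * bytes -> GB/s

print(peak_bw_gbs(1066, 2, 256))  # X1 DDR3-2133   -> ~68 GB/s
print(peak_bw_gbs(1375, 4, 256))  # PS4 GDDR5-5500 -> ~176 GB/s
```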




petalpusher said:
[snip]

OMG thank you again!  It has been proven many times that higher bandwidth trumps lower latency by a considerable amount.  This latency advantage is like most other things MS has been saying:  BS.



petalpusher said:

It's really, again, a non-issue in my opinion; I would not be surprised at all if, between the two consoles, the one with GDDR5 is the fastest on CPU code as well.


That makes no sense.
The Xbox One's CPU is clocked higher and has lower-latency memory, and CPUs aren't exactly bandwidth hungry, especially the low-end ones found in the consoles. So please, do enlighten me on how you came to such a conclusion.



--::{PC Gaming Master Race}::--

Again, take a look at the GTA 4 CPU benchmarks: doubling memory bandwidth gives you almost a 50% increase in performance. So when you say "CPUs aren't exactly bandwidth hungry", how do you come to such a conclusion?

Show me one CPU game benchmark on memory where better latencies increase your performance while the bandwidth is significantly lower. I don't think the latencies will be very different between the two sets of RAM, but let's pretend they could be, for the sake of a hypothetical demonstration.