
Penello specifies why X1 vs PS4 performance difference is overstated

Pemalite said:
Captain_Tom said:

Exactly.  He can claim that 50% more cores doesn't net you 50% more power, but he is ignoring the fact that the PS4 also has 50% more ROPs/TMUs/etc.  As a matter of fact, the 7970 has double the cores of the 7850, and guess what?  It performs twice as well!

Then add in the fact that the PS4 has WAY more bandwidth and hUMA, and it is easy to see how it will perform twice as well, like some developers have directly suggested.  Get your heads out of the clouds, people...


It has almost 50% more of everything, except for a few things like the Geometry Engines, which are going to be a big part of next generation; everything will have depth, hopefully no more flat, blurry ground.

The bandwidth advantage of the PS4 isn't as big as you think either; the Xbox One has lower bandwidth requirements to begin with due to the slower GPU, and the eSRAM will give it that little extra boost.
Of course, ideally Microsoft should have gone with GDDR5, but probably decided against it due to immediate costs (and possibly CPU performance, due to the roughly 20% added latency?). GDDR5 doesn't enjoy the economies of scale that DDR3 does, and it also requires a more complex memory controller, which costs transistors; the transistor budget that could have been spent on the memory controller and GPU was pretty much all thrown at the eSRAM, and then some.

On the flip side, once low-end GPUs and IGPs start using GDDR5, it's going to be good news for Sony: GDDR5 will get cheaper. High-end cards don't really sell much in terms of volume, so their shift to GDDR6 won't impact prices much.
Whereas DDR3 is going to get more costly from here on out; DDR3 prices have already increased over the past year, and that cost should jump for Microsoft as the PC shifts its focus to DDR4 production.


This is complete nonsense. DDR3 is produced in orders of magnitude greater volume (say, 1000 DDR3 sticks for every 10 GDDR5 sticks), so it will always be much cheaper than GDDR5, even if, say, in 5 years it's 100 DDR3 for every 1 GDDR5. Besides, it's not just volume that matters, but also production difficulty. GDDR5 is just harder to produce, period, because it's better.

 

There's probably not even going to be a GDDR6. It's pretty obvious you have no tech knowledge at all. The next step for GPUs is probably stacked memory.

GDDR5 is MASSIVELY more expensive than DDR3 currently. A 2 GB difference in GDDR5 often makes the same video card $60 more expensive. We just don't see the difference because MS put the expensive Kinect in every box. But who knows, Kinect may help the Xbox outsell the PS4 anyway, so it might have been a good move. Most casuals who would be interested in Kinect aren't on message boards.

The eSRAM was a tradeoff. It will likely pay off more and more as time goes on: it will get easier and easier to fab the SoC, while the difference in RAM price will remain a huge disadvantage for Sony forever.

Heck, another factor is that in order to have a 256-bit bus, the SoC must be above a certain size, say 200 mm². After one process shrink, it's possible the XBO and PS4 SoCs will be the exact same size/cost. We know the XBO SoC is 363 mm²; let's say the PS4 SoC is 300 mm² (just a guess). A shrink tends to halve the size, but remember you can't go below 200 mm² anyway. So after one shrink, the PS4 SoC goes to 200 mm², and so does the XBO's. So the eSRAM won't even matter in the future.
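(A minimal sketch of that shrink arithmetic, for anyone who wants to play with the numbers; the 300 mm² PS4 figure is the guess above, and the 200 mm² floor for a 256-bit bus is an assumption of the argument, not a confirmed spec.)

# Die-shrink sketch in Python. Assumptions, not confirmed numbers: a full
# node shrink halves die area, and a 256-bit memory bus imposes a
# pad-limited floor of roughly 200 mm^2 on the SoC.
PAD_LIMIT_MM2 = 200

def after_shrink(die_mm2):
    # Halve the die, but never go below the pad-limited minimum.
    return max(die_mm2 / 2, PAD_LIMIT_MM2)

print(after_shrink(363))  # XBO: 363 -> 200 (181.5, clamped to the floor)
print(after_shrink(300))  # PS4 (guessed): 300 -> 200, i.e. same size/cost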

I would expect the XBO with Kinect to drop to $399 as early as fall 2014, for example, while the PS4 will remain $399. MS could also probably do a $299 Kinect-free SKU if they need to, whereas Sony will be at $399 for a long time.

Another factor is profit: MS has already said the XBO is break-even or profitable; Sony has not said the same about the PS4. Sony is obviously very good at losing money, it's what they've been doing for years. But corporate strategies don't mean the PS4 is cheaper, it just means Sony is willing to lose yet more money on the PS4, and MS may not be willing to on the XBO.



Captain_Tom said:

Exactly what I was thinking.  People act like DDR3 was cheaper, but in the long run it could easily end up costing more.  Especially since they tried to make up for it with costly eSRAM...

There's a real possibility MS will be able to use DDR4 later in the X1's cycle, if their controller is designed to support it, and it's very likely it will be. There's not much difference between the two standards, and the DDR3 they picked is precisely where DDR4 begins (2133 MHz).

As for the old debate about latency: the X360 never had a problem using GDDR3 for its CPU. The whole thing is some kind of urban legend; GDDR latencies are really good. In fact, GDDR chips are just better memory chips overall (which comes at a price); they aren't behind in latency, they are just way ahead in base clock and effective bandwidth (because of the 4x multiplier).

In PC CPU benchmarks, bandwidth always wins over latency: if you increase the speed of your memory, you increase bandwidth and gain fps, even if you have to loosen the timings. GTA memory benchmarks are a good example.

 

Even if you go from 1600 MHz at 7-7-7-19 to 1866 MHz at slower 9-9-9-24 timings, you gain fps on the CPU side of things. So now let's imagine going from 2133 MHz to 5500 MHz, with equal or even worse timings... it's still going to be faster. Modern CPUs are also highly parallel in multi-core configurations, so timings are becoming less and less important; like on the GPU side, bandwidth is everything.
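(To make those numbers concrete, here's the conversion as a quick Python sketch; DDR3's CAS latency is counted in cycles of a clock equal to half the effective data rate.)

# CAS latency in nanoseconds for DDR3: CL cycles at a clock of half the
# effective (MT/s) data rate.
def cas_ns(data_rate_mts, cl):
    clock_mhz = data_rate_mts / 2
    return cl / clock_mhz * 1000

print(cas_ns(1600, 7))  # 8.75 ns  (DDR3-1600 at 7-7-7-19)
print(cas_ns(1866, 9))  # ~9.65 ns (DDR3-1866 at 9-9-9-24: ~10% worse
                        #           latency, but ~17% more bandwidth)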

OMG, thank you again!  It has been proven many times that higher bandwidth trumps lower latency by a considerable amount.  This latency advantage is like most other things MS has been saying: BS.

You do realize the XBO has MASSIVELY more peak bandwidth than the PS4, right? 272 GB/s for the XBO vs 176 GB/s for the PS4. It's actually over double on a per-FLOP basis.
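(Checking that ratio: the GPU throughput figures below are the commonly cited ones, roughly 1.31 TFLOPs for the XBO and 1.84 TFLOPs for the PS4; they aren't stated in the thread itself.)

# Peak bandwidth per unit of GPU compute, using the bandwidth figures
# above and the commonly cited (assumed, not from this thread) TFLOPs.
xbo_bw_gbs, xbo_tflops = 272, 1.31  # DDR3 + eSRAM combined peak
ps4_bw_gbs, ps4_tflops = 176, 1.84  # unified GDDR5 peak

xbo = xbo_bw_gbs / xbo_tflops  # ~208 GB/s per TFLOP
ps4 = ps4_bw_gbs / ps4_tflops  # ~96 GB/s per TFLOP
print(xbo / ps4)               # ~2.2, i.e. "over double"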

You aren't seriously just counting the DDR3 and pretending the eSRAM doesn't exist?

 

If that's what people are generally doing, then wow, Sony has really pulled the wool over people's eyes.



Captain_Tom said:

Exactly what I was thinking.  People act like DDR3 was cheaper, but in the long run it could easily end up costing more. [...]

In PC CPU benchmarks, bandwidth always wins over latency: if you increase the speed of your memory, you increase bandwidth and gain fps, even if you have to loosen the timings. [...]

OMG, thank you again!  It has been proven many times that higher bandwidth trumps lower latency by a considerable amount.


No, this is nonsense too. The CPU in both these machines will only be using maybe 20 GB/s.

 

The latency can actually make a huge difference; this is why CPUs have MBs of L1 and L2 cache, and adjusting the amount of cache has a HUGE effect on CPU performance. It prevents cache misses that cost hundreds of cycles of wait time while the CPU goes out to main memory.
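(A rough way to see that cost for yourself, as a Python sketch; interpreter overhead blunts the effect compared to native code, but random access over a large array is still measurably slower than a sequential walk, largely because far more reads miss the caches and pay main-memory latency.)

# Sequential vs. random traversal of the same data: the shuffled order
# defeats caching and prefetching, so many more reads go out to main memory.
import random
import time

N = 10_000_000
data = list(range(N))
sequential = list(range(N))
shuffled = sequential[:]
random.shuffle(shuffled)

for name, order in (("sequential", sequential), ("random", shuffled)):
    start = time.perf_counter()
    total = sum(data[i] for i in order)
    print(name, round(time.perf_counter() - start, 2), "seconds")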

 

It also depends on the benchmark. If the benchmark you linked is memory-bandwidth limited, then increasing bandwidth will help the most. But not every workload or benchmark is the same.

 

But I wouldn't worry about the CPUs; where the massively lower latency of the XBO's eSRAM over the PS4's GDDR5 can really pay off is with the GPU.



Do you guys know how to calculate real latency for memory? It appears there is some misunderstanding here.

Latency is not just defined by the CAS number; it's (CL / data rate) x 2000, or equivalently (CL / base clock) x 1000.

The Xbox One's memory chips are 2133 MHz CAS 14. That's probably really cheap DDR3, but that was the point, to choose cheap memory from the start and get a huge amount of it (8 GB), so you can't blame them for this.

It's in the Micron PDF for the chips that have been identified in the Xbox One teardown pictures.

So their real latency is (14/2133) x 2000 = 13.13 ns (essentially what's described in the PDF).

 

 

Now let's take a look at the GDDR5 produced by Samsung for the PS4; it's here:

http://www.samsung.com/global/business/semiconductor/product/graphic-dram/resource

 

Also the Elpida equivalent in 2 Gb modules:

http://www.elpida.com/pdfs/E1864E10.pdf

 

Worst case on both is CL 20 (it's in the PDF): (20/5500) x 2000 = 7.27 ns, and that's what they announce as well (they call them "7 ns GDDR5 chips").

 

So despite all the FUD about this, GDDR5 is also better at real-world latency, because it's way faster; latency has to be defined at a given speed or it doesn't make sense. MS would have had to pick way faster DDR3 modules to get an advantage on latency. At 2133 MHz you can't have both at a reasonable price: it would have to come down as low as CAS 7 to get under 7 ns latency, and that kind of DDR3 is probably twice the price or more compared to what they will actually pay for their modules.
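(For reference, here is the post's calculation as a Python sketch, exactly as stated; note that a reply further down disputes whether GDDR5's CL should be counted against half the data rate, as done here, or against the 1375 MHz command clock.)

# The post's formula: latency_ns = CL / data_rate(MT/s) * 2000, i.e. CL
# cycles counted against a clock of half the effective data rate.
def latency_ns(cl, data_rate_mts):
    return cl / data_rate_mts * 2000

print(latency_ns(14, 2133))  # ~13.1 ns: XBO DDR3-2133 at CL 14
print(latency_ns(20, 5500))  # ~7.3 ns:  PS4 GDDR5 at 5500 MT/s, CL 20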

DDR3 is simply not a high-end type of memory. It's a really good, cheap solution on the CPU side, and you can get really decent latency at moderate speeds by tightening the timings, but anyone would replace it with GDDR5 if possible; it's just better memory overall.

There's no "massively lower latency" DDR3 in Xbox One,  Not even MS is using this argument, because they know they are behind on this too and it would be easily debunked, it's just their supporters that usually don't know anything about hardware, who brag on every forum that the X1 memory is Sooo much better at latency..(funny considering Xbox 360 had GDDR3 for its CPU back then)

eSRAM is another story, and there is maybe an advantage there for very specific tasks, very demanding back-and-forth kinds of processing, but there's no 8 GB of it, just 32 MB, and I highly doubt anyone will use the eSRAM for CPU tasks, since the GPU side is always going to be starving for more of it.




You do realize the XBO has MASSIVELY more peak bandwidth than the PS4, right? 272 GB/s for the XBO vs 176 GB/s for the PS4. It's actually over double on a per-FLOP basis.

You aren't seriously just counting the DDR3 and pretending the eSRAM doesn't exist?

 

If that's what people are generally doing, then wow, Sony has really pulled the wool over people's eyes.


32/8224 = 0.004.  So yeah, having 0.4% of your memory be a little faster is going to even things out against the PS4, whose memory is twice as fast for the other 99.6%. *Sarcasm*  It's negligible, buddy. Especially since only the PS4 has hUMA.
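(The proportion being computed:)

# 32 MB of eSRAM against the full pool: 8 GB of DDR3 plus the eSRAM itself.
esram_mb = 32
pool_mb = 8 * 1024 + esram_mb  # 8224 MB
print(esram_mb / pool_mb)      # ~0.004, i.e. about 0.4% of all memory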



petalpusher said:

Do you guys know how to calculate real latency for memory? It appears there is some misunderstanding here. [...]

So despite all the FUD about this, GDDR5 is also better at real-world latency, because it's way faster; latency has to be defined at a given speed or it doesn't make sense. [...]


Man, people like you restore my faith in humanity.  Thank you for actually doing the research so you know what you are talking about!  Seriously, people, it comes with a $150 Kinect.  This is a $350 console at best (some say Kinect is almost half of the cost).



petalpusher said:

Do you guys know how to calculate real latency for memory? It appears there is some misunderstanding here.


I think you're getting a little confused.

CAS Latency tells us how many clock cycles the memory will take to return a data request, i.e. CL = 8 would mean 8 clock cycles to deliver the data, whilst DRAM with CL = 9 would take 9 clock cycles to do the same operation.
You can then extrapolate the DRAM's latency, which I outlined in my previous post.

Obviously you have other aspects of DRAM, like the RAS-to-CAS delay, which has its own latency and such.

But it's common knowledge in the PC space that the way to work out the DRAM's latency is exactly how I laid it out.

I mean, this: 20/5500 x 2000 = 7.27 ns is just wrong; you're not even using the real clock.

The latency would actually be 14.5 ns if you had a CL of 20 and a 1375 MHz (5500 MT/s effective) clock.

CAS Latency / clock speed (actual):
thus 20/1.375, or 20/1375 x 1000.

Besides, this is just the DRAM latency; we have zero idea how the eSRAM will affect latency in the Xbox One. In some cases it might even reduce it, in other cases it will increase it, due to the laws of physics.
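(The disagreement between the two posts comes down to which clock the CL cycles are counted against. A sketch of both conventions:)

# CL cycles converted to nanoseconds against a given clock. GDDR5's command
# clock is a quarter of the 5500 MT/s data rate (1375 MHz), which is where
# the 14.5 ns figure comes from; petalpusher counted against half the data
# rate instead.
def cl_to_ns(cl, clock_mhz):
    return cl / clock_mhz * 1000

print(cl_to_ns(20, 5500 / 4))  # ~14.5 ns: GDDR5 CL 20 on the 1375 MHz clock
print(cl_to_ns(20, 5500 / 2))  # ~7.3 ns:  same CL under the other convention
print(cl_to_ns(14, 2133 / 2))  # ~13.1 ns: DDR3-2133 CL 14 (both agree here)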

fallen said:


This is complete nonsense. DDR3 is produced in orders of magnitude greater volume (say, 1000 DDR3 sticks for every 10 GDDR5 sticks), so it will always be much cheaper than GDDR5, even if, say, in 5 years it's 100 DDR3 for every 1 GDDR5. Besides, it's not just volume that matters, but also production difficulty. GDDR5 is just harder to produce, period, because it's better.

 


Funny.
I never said that GDDR5 wasn't more difficult to manufacture, but maybe you have missed the fact that the PC is transitioning over to DDR4?
Not sure if you were around (or too young to remember) when the PC transitioned from EDO RAM to SDRAM, or SDRAM to DDR, or even DDR to DDR2, and then from DDR2 to DDR3.
Every single time that occurred, the prior DRAM technology increased in price as the new DRAM standard became available (which decreases in price, despite always being more complex than the DRAM version before it). Why? Because essentially all production of the older DRAM ceases while demand for it remains, so supply and demand comes into play: prices rise on the older DRAM, and prices fall on the new DRAM because people aren't buying it up in droves.

It's all simple logic, really, and history is there to provide plenty of proof.



--::{PC Gaming Master Race}::--


Funny.
I never said that GDDR5 wasn't more difficult to manufacture, but maybe you have missed the fact that the PC is transitioning over to DDR4? [...] Every single time that occurred, the prior DRAM technology increased in price as the new DRAM standard became available. [...]


Yeah, I got my 8 GB of 2133 MHz RAM for $42!  Now it is literally double that in price!

 

Look at DDR2 now: it is $200 for 8 GB of 667 MHz.  So basically, DDR3 prices should get up to 5x what they are now within 5 years.  Oh, and GDDR5 isn't insanely expensive, in all honesty: the 2 GB DDR3 7730 is in fact $20 MORE than the 1 GB GDDR5 version...