petalpusher said:

There's a real possibility MS will be able to use DDR4 later in the X1 cycle, if their controller is designed to support it, and it's very likely it will be. There's not much difference between the two standards, and the DDR3 speed they picked is precisely where DDR4 begins (2133 MHz)


The Xbox One wouldn't have a combined DDR3+DDR4 memory controller; that costs transistors, the transistor budget for the Xbox One's APU is already massive, and a DDR4 memory controller would need more transistors than a DDR3 one.
It's why AMD ditched the combined DDR2+DDR3 memory controller after the Phenom II: why waste transistors that could be used for more cache or another core?

petalpusher said:

As for the old debate about latency: the X360 never had a problem using GDDR3 for its CPU. The whole thing is some kind of urban legend. GDDR latencies are really good; in fact GDDR chips are just better memory chips overall (which comes with a price). They aren't behind in latency, they're just way ahead in base clock and effective bandwidth (because of the 4x multiplier).


GDDR latency in terms of latency per clock has never been good; GPUs just tend not to be latency sensitive. Low-latency memory makes a big difference when a CPU stalls.
This is also why enthusiasts have always pushed for lower-latency memory on AMD platforms: AMD's predictor is incredibly poor compared to Intel's, so when it has a cache miss and has to travel all the way down to system RAM, it takes a performance penalty.
Granted, newer (higher-end) CPUs from AMD won't see much difference, because they have large and fast caches. Kabini/Jaguar is a completely different beast, however; in plain terms, they're simple "cheap crap".
Intel has done a lot of work on that front since the Core 2 days, so using Intel as an example is always bad form.

Also, keep in mind there won't be as much buffering going on in the consoles as there is on PC at various stages, for multiple reasons, which also hides latency.

And really, despite timings (in clock cycles) increasing over the years for system RAM to keep up with the clock rate, absolute latency hasn't actually changed.
Grab some DDR3-1600 memory: that's an 800 MHz IO clock with a typical CAS latency of 8, which works out to 8 / 800 MHz = 10 ns.
Grab some DDR2-800 memory: that's a 400 MHz IO clock with a typical CAS latency of 4, which is also 4 / 400 MHz = 10 ns.

Now with GDDR5 the data rate is 4x the IO clock instead of 2x, i.e. 5 GHz (effective) GDDR5 runs a 1.25 GHz IO clock and would have a typical CAS latency of 15:
15 / 1.25 GHz = 12 ns
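If you want to sanity-check those numbers yourself, here's a rough Python sketch of the same arithmetic. The CAS values are just the typical figures quoted above, not guarantees for any specific chip:

# Rough sketch: first-access CAS latency in nanoseconds = CAS cycles / IO clock.
# Effective ("marketing") rate is 2x the IO clock for DDR2/DDR3 and 4x for GDDR5.
def cas_latency_ns(effective_mhz, cas_cycles, multiplier):
    io_clock_mhz = effective_mhz / multiplier
    return cas_cycles / io_clock_mhz * 1000  # cycles / MHz = microseconds, so scale to ns

print(cas_latency_ns(800, 4, 2))    # DDR2-800, CL4   -> 10.0 ns
print(cas_latency_ns(1600, 8, 2))   # DDR3-1600, CL8  -> 10.0 ns
print(cas_latency_ns(5000, 15, 4))  # GDDR5 5 GHz, CL15 -> 12.0 ns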

If you were to benchmark 10 ns memory against 12 ns memory, you would notice a difference. Synthetics usually show larger differences than what you see in real-world applications, but there is still a difference.


So essentially, if the CPU is being utilised fully, which we should assume in a fixed hardware environment, the performance delta will be bigger on the consoles than what you see on the PC, where large portions of a CPU often go under-utilised.
For GPUs it's no big deal.
In the end, we already knew the Xbox One has a faster CPU than the PlayStation 4 anyway. The difference for next generation, however, is that it simply isn't going to matter: the PlayStation 4's GPU has extra resources to offload some of the more parallel tasks the CPU might otherwise handle.



--::{PC Gaming Master Race}::--