| fordy said:
|
Wow.. a "s/he said..s/he said" thread... grrreat.
Look, it is apparent you don't know how memory access works. Yes, GDDR5 is there to handle large chunks of data. Guess what: DDR3 memory is also there to handle large chunks of data! My first post tried to explain in simple terms why the GDDR5 latency problem has become a myth (assuming what I called "ugly" programming). Technically, there are a lot of what-ifs, and things get complicated rather fast even though we are "only" dealing with a "get some memory" problem.
What you apparently don't know is the simple fact that every memory controller in every GPU/CPU in the world has a limited burst length. Whether it sits in a GPU or in a CPU, the controller HAS to chain multiple burst sequences to do whatever it is supposed to do. If you want to learn what this means at the transistor level, you'd have to find controller manuals and work through the timing diagrams (did I mention that I DESIGNED memory cards and memory controllers decades ago?).
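To make the limited-burst-length point concrete, here's a toy model (my own illustration, with hypothetical numbers, not figures from any real GPU or CPU datasheet): no matter how big a transfer is, the controller has to split it into fixed-size bursts.

```python
# Toy model of a burst-based memory controller.
# BUS_WIDTH_BYTES and BURST_LENGTH are hypothetical example values.

BUS_WIDTH_BYTES = 32   # e.g. a 256-bit memory bus
BURST_LENGTH = 8       # transfers per burst (hypothetical)
BYTES_PER_BURST = BUS_WIDTH_BYTES * BURST_LENGTH  # 256 bytes per burst

def bursts_needed(num_bytes: int) -> int:
    """Every transfer, however large, is chopped into whole bursts."""
    return -(-num_bytes // BYTES_PER_BURST)  # ceiling division

print(bursts_needed(4096))  # a 4 KiB block -> 16 separate bursts
print(bursts_needed(100))   # even 100 bytes still costs a whole burst -> 1
```

The same arithmetic holds whatever the real burst length is; only the constants change.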
In the case of a GPU, many, many, many bursts may target consecutive memory addresses. In the case of a CPU, many, many, many bursts may target consecutive memory addresses ("ugly" programming), or small bursts may target non-aligned addresses ("clean", old-style programming). The net result is that bandwidth wins over latency, since today's caches are so big that long runs of consecutive bursts happen far more often than single bursts.
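Here is a rough sketch of why the consecutive-burst pattern wins (again my own illustration; the cycle counts are invented purely to show the trade-off, not taken from real hardware): each separate, non-consecutive run of accesses pays the first-access latency, while a long consecutive stream pays it only once.

```python
# Toy latency-vs-bandwidth comparison. SETUP_CYCLES and CYCLES_PER_BURST
# are hypothetical numbers chosen only to illustrate the trade-off.

SETUP_CYCLES = 20      # first-access latency per run (row activate etc.)
CYCLES_PER_BURST = 4   # streaming cost once a consecutive run is going

def total_cycles(num_bursts: int, runs: int) -> int:
    """runs = how many separate (non-consecutive) sequences the bursts form.
    Each run pays the setup latency once; every burst pays the stream cost."""
    return runs * SETUP_CYCLES + num_bursts * CYCLES_PER_BURST

gpu_style = total_cycles(num_bursts=64, runs=1)   # one long consecutive stream
cpu_style = total_cycles(num_bursts=64, runs=64)  # 64 scattered single bursts

print(gpu_style)  # 1*20 + 64*4  = 276 cycles
print(cpu_style)  # 64*20 + 64*4 = 1536 cycles
```

Same number of bursts moved, roughly 5.5x the cycles when every burst stands alone; that is the sense in which the extra GDDR5 latency stops mattering once accesses come in long consecutive runs.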
This is getting waaaay too technical. So here is the ultimate result: 8 GB of GDDR5 in the PS4 wins hands down against any other PC-like setup. (It will be interesting to study the MS solution in the NextBox in detail, should that ever be revealed, but it is already known they use dedicated hardware "move" engines to connect DDR3 to eDRAM to the GPU to remedy the bandwidth problem.)
....and this ends my contribution to the thread.
