BlueFalcon said:
If MS wants more system memory, 8GB of GDDR5 might have been way too costly. A compromise could be a custom 256-bit bus with DDR3-2133? That gives exactly the rumored 68.256GB/s (256 bits x 2133MHz / 8 bits per byte) |
There is no way they could ever do 8GB of GDDR5; hell, 4GB is already pushing the limits.
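For what it's worth, the 68.256GB/s figure in the quote is just the raw bus math. A quick sketch below, keeping in mind the 256-bit bus and DDR3-2133 speed are still rumors, not confirmed specs:

```python
# Sanity check of the rumored bandwidth figure.
# Assumptions (rumored, not confirmed): 256-bit bus, DDR3-2133 effective rate.
bus_width_bits = 256
effective_rate_mts = 2133          # DDR3-2133 effective transfer rate (MT/s)

bandwidth_mb_s = bus_width_bits * effective_rate_mts / 8   # bits -> bytes
print(f"{bandwidth_mb_s / 1000:.3f} GB/s")                 # 68.256 GB/s
```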
BlueFalcon said:
You make a good point. But even if the HD7970M in the PS4 has half the memory bandwidth, it may still have dedicated GDDR5 for the GPU. You still have not addressed the other issue I raised: sharing DDR3 through the northbridge means a large latency penalty.

The HD5450 and the E-350 (HD 6310) have the same number of shader processors, but the HD5450 has a dedicated 1.6GHz memory bus to feed it. Look at the huge performance penalty incurred by a GPU with DDR3 shared over the northbridge (the Xbox 720 approach):
http://images.anandtech.com/graphs/graph4023/34111.png
http://images.anandtech.com/graphs/graph4023/34112.png
http://images.anandtech.com/graphs/graph4023/34113.png

"Compared to the Radeon HD 5450 the 6310 offers between 66 - 69% of its performance in our GPU bound tests. The performance reduction is entirely due to the 6310's limited memory bandwidth being shared with the dual Bobcat cores on-die."

From what I am seeing, the memory bandwidth limitation plus the additional latency of the GPU having to go through the northbridge to reach system memory will produce at least a 30% performance hit, and that's assuming MS can even maintain HD 7770 GHz Edition-level memory bandwidth to begin with. So you have a double whammy: the extra northbridge middleman latency plus the reduction in memory bandwidth on the Xbox 720 due to the lack of GDDR5. |
GPUs are not really latency dependent at all though, and GDDR5 inherently has much higher latency than DDR3 anyway. And after all the issues the split pool caused on the PS3, I really doubt the PS4 won't use a unified memory pool. The Move Engines and the eSRAM are also likely meant to facilitate memory management, and I imagine there will be custom memory controllers too.
Also, the E-350's GPU is clocked at 492MHz vs the HD5450's 650MHz, and the APU has to share a single 64-bit channel. Not exactly a fair comparison, and the X720 will have multiple channels, plus the eSRAM to fall back on, in a system designed for high GPU performance.
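To put rough numbers on the bandwidth side of that comparison, here's a back-of-the-envelope sketch. The DDR3-1600 and DDR3-1066 configurations are my assumptions about the setups in that AnandTech test, and the 50/50 CPU/GPU split is purely illustrative:

```python
# Back-of-the-envelope: dedicated vs shared memory bandwidth.
# Assumed configs (not confirmed from the article): HD5450 on a dedicated
# 64-bit DDR3-1600 bus, E-350 on a single shared 64-bit DDR3-1066 channel.

def bandwidth_gb_s(bus_bits, rate_mts):
    """Peak bandwidth = bus width in bytes * effective transfer rate."""
    return bus_bits / 8 * rate_mts / 1000

hd5450_dedicated = bandwidth_gb_s(64, 1600)   # ~12.8 GB/s, GPU only
e350_total       = bandwidth_gb_s(64, 1066)   # ~8.5 GB/s, shared CPU + GPU
e350_gpu_share   = e350_total * 0.5           # hypothetical 50% GPU share

print(f"HD5450 GPU bandwidth:      {hd5450_dedicated:.1f} GB/s")
print(f"E-350 total bandwidth:     {e350_total:.1f} GB/s")
print(f"E-350 GPU share (assumed): {e350_gpu_share:.1f} GB/s")
```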
BlueFalcon said:
If eSRAM were a better approach than dedicated GDDR5 memory, then GPU makers would solder dedicated eSRAM/eDRAM chips next to the GPU die on the same package and save $ by not having to use the more expensive GDDR5 or wider bus widths to feed the GPU. If you look at the Wii U's crippled memory subsystem, it actually goes a similar cost-saving route by neutering the GPU with DDR3 and trying to make up for it with eDRAM on the die. MS promised us nearly 'free 4xAA' with the Xbox 360 due to its embedded RAM and we got nothing of the sort. Sorry, but I am smelling more marketing BS from them on this one.

The original Xbox was the most powerful console of its generation, and it was the closest to a PC design. It appears the PS4's GPU subsystem mimics the approaches used on the PC a lot more closely as well. I am going to lean towards the PC approach being superior for performance and efficiency, because that's the approach used for $500-1000 PC gaming GPUs. Anything else smells like cost cutting. Ironically, eDRAM allows larger amounts of memory on smaller chips compared to eSRAM (about a 3x area savings). If MS is using eSRAM instead of the more expensive eDRAM, that only continues to highlight all the areas where they are trying to save $$$ in my eyes. I hope I am wrong.

Of course all of this is just rumors, but MS's console seems to be pushing marketing to the average consumer ("Oh it has 8GB of system memory, it must be awesome!!!!") and some "magic sauce" modules. For all we know the Video Codecs and Data Move Engines could be used for recording TV shows directly onto the Xbox 720 and have nothing to do with aiding graphics. This "secret sauce" hidden deep within the Xbox 720 is not inspiring much confidence at the moment. |
It's not exactly a better approach, it is just a way of mitigating the bottleneck of the slower DDR3. You can't really use this approach on PC for a few reasons. The first is that DirectX does not support it, and even if it did you would still need each game to be programmed around making use of it. Add in the fact that on PC you have a very wide range of resolutions and framebuffer formats, and suddenly it becomes highly impractical to implement.
And the X360's eDRAM did provide nearly free 4x MSAA; the problem is that MSAA is only really practical with forward renderers (most major games/engines use a deferred lighting solution these days), and the framebuffer also has to fit in the 10MB. There is a reason games like Alan Wake can afford 4x MSAA with smooth performance: they use a forward renderer and a native resolution low enough to fit in the eDRAM. That is also why the COD games have MSAA and perform better on X360 than PS3, because their framebuffer is small enough for MSAA to fit in the 10MB of eDRAM.
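To make the "fits in 10MB" point concrete, here's a rough framebuffer-size sketch. It assumes a simple 32-bit colour + 32-bit depth/stencil layout; the example resolutions are illustrative, and the 1024x600 2x MSAA case is just the commonly reported COD-on-360 configuration, not something I've verified:

```python
# Rough check of which render targets fit in the X360's 10MB eDRAM,
# assuming 4 bytes colour + 4 bytes depth/stencil per sample.
EDRAM_BYTES = 10 * 1024 * 1024
BYTES_PER_SAMPLE = 4 + 4

def framebuffer_bytes(width, height, msaa):
    return width * height * msaa * BYTES_PER_SAMPLE

for label, w, h, msaa in [
    ("1280x720, no MSAA", 1280, 720, 1),
    ("1280x720, 4x MSAA", 1280, 720, 4),
    ("1024x600, 2x MSAA", 1024, 600, 2),   # commonly reported COD setup
]:
    size = framebuffer_bytes(w, h, msaa)
    fits = "fits" if size <= EDRAM_BYTES else "needs tiling"
    print(f"{label}: {size / 2**20:.1f} MB -> {fits}")
```

Which is the gist of the argument: at 720p with 4x MSAA you blow well past 10MB and need tiling, while a smaller framebuffer with 2x MSAA squeezes in.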
@TheVoxelman on twitter