drkohler said:
There is no question that MS R&D knew what they started designing a few years ago. Just a few things to clear up misconceptions about why it ended up where we are now.

First of all, both companies had to decide on the memory layout: unified memory or separate pools. No surprise here that both chose unified memory, as the Xbox 360 had a clear advantage over the PS3 on this point.

Next, the design goal for the Xbox One was set: "the ultimate media hub/gaming console". Right from the start it was clear that 4 GB of main memory would be a very, very tight fit, and I am pretty sure 8 GB was the undisputed starting point. Around 2008-2009, when development started, there was only one memory type capable of delivering 8 GB, and it was foreseeable that faster DDR3 would be available in the future. 8 GB of GDDR5 was technically impossible to achieve (it would have required a 512-bit bus with 32 chips in clamshell mode, which was far too costly to even consider). Hence the PS4 was always a 4 GB GDDR5 design, until the very, very last minute, when the higher-capacity 4 Gbit chips became available/affordable and could boost the PS4 to 8 GB with no (or only minor) design changes to the motherboard.

The last decision was how to "beef up" GPU memory access on the Xbox One. Using embedded RAM (either DRAM or SRAM) is a no-brainer for this purpose; both the Wii U and the Xbox One use it. The PS4 could have had such a cache too (it was a real possibility, as we now know from a Mark Cerny talk). Instead, the PS4 ended up with a single pool of 176 GB/s bandwidth, of which probably 150 GB/s go to the GPU on a sunny day (good enough for 1080p and optional GPGPU stuff) and 20 GB/s go to the rest (which is more than enough for game and OS software).

Why MS went with a huge 32 MB of eSRAM and not 64 MB of eDRAM is their secret (64 MB sounds insane, but it would actually take a little less die space than 32 MB of eSRAM). On the other hand, the Wii U shows that you can somehow get away with 32 MB of eDRAM, but it was targeting a 40 nm process, and in 2008 32 MB was considered insane. If they started developing now, everyone would probably go for stacked RAM and forget all about caches.

The Xbox One has a bit of a problem here. The eSRAM has 102 GB/s of throughput, already less than the 150 GB/s the PS4's GPU gets. Unfortunately, the cache can only be filled from the main DDR3 RAM at less than 68 GB/s (less, because frame-buffer readout/updates lock you out 60 times per second). And since the cache is only 32 MB, roughly half of which is reserved for frame buffers, there is not enough space to hold sizable textures and the like; they have to be pumped in from the "slow" DDR3 RAM.

We immediately see that the Xbox One would profit from 176 GB/s GDDR5: it would greatly reduce the time spent pumping memory from one pool to the other, which blocks the entire system (there is a lot of stuff going on in the DDR3 RAM; just think of the Kinect 2 data and all the multimedia stuff). This is what shocked MS when Sony announced the 8 GB bomb: in hindsight, they could have had a unified 8 GB of GDDR5 AND the cache, and had the superior hardware...
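To put rough numbers on the calculations the post leans on, here is a quick back-of-the-envelope sketch in Python. Everything in it is either a figure from the post (68/102/176 GB/s, 32 MB eSRAM, the 512-bit clamshell claim) or an era-typical assumption supplied for illustration (2 Gbit GDDR5 dies, 32-bit channels, 1080p surfaces at 4 bytes per pixel):

```python
# Back-of-the-envelope arithmetic behind the post's claims.
# Assumed, not stated in the post: 2 Gbit GDDR5 dies (the largest circa
# 2008-2009), 32-bit channels with two chips each in clamshell mode,
# 1080p surfaces at 4 bytes/pixel, decimal GB for bandwidth figures.

MB = 1024 * 1024
GBPS = 1000 ** 3

# 1) Why 8 GB of GDDR5 looked impossible around 2008-2009:
capacity_gbit = 8 * 8                  # 8 GB = 64 Gbit
chips = capacity_gbit // 2             # 2 Gbit per die -> 32 chips
bus_bits = (chips // 2) * 32           # clamshell: 2 chips per 32-bit channel
print(f"8 GB GDDR5 then: {chips} chips on a {bus_bits}-bit bus")

# 2) How much of the 32 MB eSRAM a basic 1080p frame setup eats:
surface = 1920 * 1080 * 4              # one 32-bit 1080p surface, ~7.9 MB
color_plus_depth = 2 * surface
print(f"color+depth: {color_plus_depth / MB:.1f} MB "
      f"({color_plus_depth / (32 * MB):.0%} of the eSRAM)")

# 3) Raw scanout traffic (the front buffer is read out 60 times a second):
scanout = surface * 60
print(f"scanout: {scanout / GBPS:.2f} GB/s of the 68 GB/s DDR3 bus")
```

The first two numbers back the post directly (32 chips on a 512-bit bus; color plus depth alone fill roughly half the eSRAM). The third shows the scanout bytes themselves are tiny, so the "locks you out 60 times per second" cost would be about arbitration stalls rather than raw volume.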
This sounds like a plausible theory... it's certainly fleshed out pretty well.
However, a few things stand out. MS designed and manufactured the Xbox 360 in 6 months after Nvidia cut them off on the original Xbox GPU. It makes no sense for them to have been planning hardware specs in 2008-2009; more like 2011...
Secondly, since AMD was producing both chips, Microsoft must have known Sony was using GDDR5 RAM. 4 vs. 8 GB wouldn't change gaming performance much, especially considering MS has 3 GB of the 8 reserved for the OS. If they felt the PS4 was more powerful to a degree that would be a major issue for the X1, they would have revised the console. They certainly have the money to introduce 5 GB GDDR5 + 3 GB DDR3 for the OS by mid-2012, which is when I suspect X1 prototype manufacturing began.
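As a rough illustration of that last point, here is a tiny sketch comparing the two configurations. The 3-of-8 GB OS reservation and the 5 + 3 GB split are the figures from this reply; the 176 GB/s is borrowed from the post above as a stand-in for what a GDDR5 game pool might have offered (the real number would depend on the bus width chosen):

```python
# Game-visible memory under the two X1 configurations weighed above.
# Assumptions: 3 of 8 GB reserved for the OS; 68 GB/s is the post's DDR3
# figure; 176 GB/s is the PS4's GDDR5 figure, used here as a placeholder
# for a hypothetical GDDR5 game pool. The eSRAM is ignored for simplicity.

configs = [
    ("actual X1: 8 GB DDR3, unified",             8 - 3, 68),
    ("hypothetical: 5 GB GDDR5 + 3 GB DDR3 (OS)", 5,     176),
]

for name, game_gb, game_bw in configs:
    print(f"{name}: {game_gb} GB for games at ~{game_bw} GB/s")
```

Either way games see about 5 GB; the hypothetical split only changes the bandwidth behind it, which is the whole argument for such a revision.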