disolitude said:
drkohler said:
disolitude said:
There is no reason to think MS R&D is so incompetent...
|
There is no question that MS R&D knew what they were doing when they started designing a few years ago. Just a few things to clear up some misconceptions about why things ended up where they are now:
First of all, both companies had to decide on the memory layout: either unified memory or separate pools. No surprise that both chose unified memory, as the XBox 360's unified pool had given it a clear advantage over the PS3's split memory.
Next, the design goal for the XBox One was set: "the ultimate media hub/gaming console". Right from the start it was clear that 4GByte of main memory would be a very, very tight fit, and I am pretty sure 8GByte was the undisputed starting point. Around 2008-2009, when development started, there was only one memory type capable of delivering 8GByte, namely ddr3, and it was foreseeable that faster ddr3 would be available in the future. 8GByte of gddr5 was technically impossible to achieve (it would have required a 512bit bus with 32 chips in clamshell mode, way too costly to even consider). Hence the PS4 was always planned with 4GByte of gddr5, until the very, very last minute when the higher-capacity 4Gb chips became available/affordable and could boost the PS4 to 8GByte with no (or minor) design changes to the motherboard.
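To put rough numbers on that capacity/bus-width argument, here is a back-of-the-envelope sketch (standard GDDR5 clamshell math, not anything from the post itself; `gddr5_config` is just an illustrative helper):

```python
# Back-of-the-envelope GDDR5 board math. GDDR5 chips have a 32-bit
# interface; in clamshell mode two chips share one 32-bit channel.
def gddr5_config(total_GB, chip_Gbit, clamshell=True):
    chips = total_GB * 8 // chip_Gbit         # chips needed for the capacity
    channels = chips // 2 if clamshell else chips
    return chips, channels * 32               # (chip count, bus width in bits)

print(gddr5_config(8, 2))  # (32, 512): 8GB from 2Gb chips -> too costly to consider
print(gddr5_config(8, 4))  # (16, 256): 8GB from 4Gb chips -> what the PS4 shipped with
print(gddr5_config(4, 2))  # (16, 256): the original 4GB plan -- same board layout,
                           # which is why the last-minute 8GB bump needed no redesign
```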
The last decision was how to "beef up" the gpu memory access on the XBox One. Using embedded ram (either edram or sram) is a no-brainer for this purpose (both the WiiU and the XBox One use it). The PS4 could have had such a cache, too (it was a real possibility, as we now know from a Mark Cerny talk). Instead, the PS4 ended up with a single pool of 176G/s bandwidth, of which probably 150G/s go to the gpu on a sunny day (good enough for 1080p plus optional gpgpu stuff) and 20G/s go to the rest (more than enough for game and os software). Why MS went with a huge 32MB of esram and not 64MB of edram is their secret (64MB of edram sounds insane, but would actually take a little less die space than 32MB of esram). On the other hand, the WiiU shows you can somehow get away with 32MB of edram, but they were targeting a 40nm process, and in 2008 32MB was considered insane. If they started developing now, everyone would probably go for stacked ram and forget all about caches.
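For reference, the 176G/s figure is just bus width times data rate; the 150/20 split above is drkohler's assumed allocation, not a hardware partition. A quick sketch using the publicly known bus widths and clock rates:

```python
# Peak bandwidth = bus width in bytes * effective data rate in GT/s.
def peak_bw_GBs(bus_bits, data_rate_GTps):
    return bus_bits / 8 * data_rate_GTps

print(peak_bw_GBs(256, 5.5))    # PS4: 256-bit GDDR5 @ 5.5 GT/s -> 176.0 GB/s
print(peak_bw_GBs(256, 2.133))  # XB1: 256-bit DDR3-2133 -> ~68.3 GB/s
                                # (the "68G/s" figure in the next paragraph)
```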
The XBox One has a bit of a problem here. The esram has 102G/s throughput, already less than the ~150G/s the PS4 gpu gets. Worse, the esram can only be filled from the main ddr3 ram at less than 68G/s (frame buffer readout/updates lock you out 60 times per second). And since the esram is only 32MB, roughly half of which is reserved for frame buffers, there is not enough space to hold sizable textures in it; those have to be pumped in from the "slow" ddr3 ram. We immediately see that the XBox One would profit from 176G/s gddr5: it would greatly reduce the time spent pumping memory from one pool to the other, which blocks the entire system (there is a lot going on in the ddr3 ram, just think of Kinect2 data and all the multimedia stuff). This is what shocked MS when Sony announced the 8GByte bomb; in hindsight they could have had a unified 8GByte of gddr5 AND the cache, and have the superior hardware...
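A quick sketch of why roughly half the 32M is gone before a single texture is placed (assuming a common 1080p setup: one RGBA8 color target plus one D24S8 depth/stencil target, 4 bytes per pixel each; actual render-target setups vary per game):

```python
# Rough ESRAM budget at 1080p under the assumptions above.
W, H = 1920, 1080
color = W * H * 4                   # ~7.9 MiB RGBA8 color render target
depth = W * H * 4                   # ~7.9 MiB D24S8 depth/stencil buffer
print((color + depth) / 2**20)      # ~15.8 MiB -> roughly half of the 32 MiB ESRAM
```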
|
This sounds like a plausible theory... it's certainly fleshed out pretty well.
However, a few things stand out. MS designed and manufactured the Xbox 360 in 6 months after Nvidia cut them off on the original Xbox GPU. It makes no sense for them to plan hardware specs in 2008-2009. More like 2011...
Secondly, since AMD was producing both chips, Microsoft must have known Sony was using gddr5 RAM. 4 vs 8 GB wouldn't change gaming performance much, especially considering MS has 3 GB of the 8 reserved for the OS. If they felt the PS4 was more powerful to an extent that would be a major issue for the X1, they would have revised the console. They certainly have the money to introduce 5 GB gddr5 + 3 GB ddr3 for the OS in mid-2012, which is when I suspect X1 prototype manufacturing began.
|
I can attest that the 4Gb chips only went MP (mass production) recently at Samsung (I rep them). So even if Microsoft started designing the Xbox One closer to 2011, DDR3 would be the safer bet in terms of capacity and price.
I have no doubt that Samsung will be able to crank out 4Gb chips to meet demand (their fabs are absolutely ridiculous; there is a reason they are #1), but COST is a slight concern. These newer 4Gb-density GDDR5 chips are not going to be cheap, at least not compared to DDR3 or even the lower-density (2Gb) GDDR5. Will the price drop over time? Sure, but DDR3 will always maintain an (ever-shrinking) price gap over time due to its insane usage in the PC and server markets. I have a funny feeling that Sony is taking a shot in the shorts on pricing, at least in the first year or so, in order to really try to distance themselves from Microsoft.
Microsoft having a high-resolution Kinect bundled with every system is not helping their price either. It's unlikely they will match Sony's price, as MS is not in the business of losing money.
We can all harp all we want on performance, but TIMING of available product at the needed capacity (8 GB of GDDR5 vs. only 4) and COST were probably the two biggest factors in Microsoft's decision. Sony took a slight gamble that might have paid off. Companies take gambles all the time (PS3, anyone?). Some work, some don't.
THAT ALL BEING SAID. It really doesn't matter aside from the launch price difference. Gamers go where the games are, and you can bet your ass that MS is going to money-hat exclusives, timed exclusives, and press as best they can. In addition, they have strong backing and momentum in the US. Time will tell whether the $100 launch price difference hurts them badly or not.