disolitude said: Guys guys... GDDR5, the Cell... no need to continue arguing about that. My point here is this: Xbox One is an AMD GPU with 768 cores, which puts it between the HD 7770 and HD 7790 in terms of power. Those cards come with GDDR5 RAM and Xbox One has higher memory bandwidth than both. It just doesn't need GDDR5 to make the most of the tech it has under the hood.
Actually, it's most likely based on Bonaire (7790, more similar to Pitcairn and Tahiti than to Cape Verde), with two fewer CUs. Before launch, the 7790 was supposed to come with a 256-bit bus, but production cards ended up with a 128-bit bus and lower bandwidth (96 GB/s). Though on paper it looks like a 7770 equivalent, the Xbox One's better memory subsystem should most likely, as you already noticed, put it in between the 7770 and 7790.
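For reference, here's the back-of-the-envelope arithmetic behind those numbers (a quick sketch of my own; the clocks and bus widths are the commonly cited reference specs, and the "higher bandwidth than both" claim for Xbox One rests on adding its 32 MB of ESRAM on top of the DDR3 figure, which this simple formula doesn't capture):

```python
# Back-of-the-envelope peak bandwidth: bus width (bits) / 8 * effective data rate (GT/s).
# Clock and bus figures are the commonly cited reference specs, not official statements.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

configs = {
    "HD 7770 (128-bit GDDR5 @ 4.5 GT/s)":   (128, 4.5),    # ~72 GB/s
    "HD 7790 (128-bit GDDR5 @ 6.0 GT/s)":   (128, 6.0),    # ~96 GB/s
    "Xbox One (256-bit DDR3 @ 2.133 GT/s)": (256, 2.133),  # ~68 GB/s from DDR3 alone, ESRAM not counted
    "PS4 (256-bit GDDR5 @ 5.5 GT/s)":       (256, 5.5),    # ~176 GB/s
    "Cerny's 128-bit option (@ 5.5 GT/s)":  (128, 5.5),    # ~88 GB/s, the figure he quotes below
}

for name, (bus, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(bus, rate):.0f} GB/s")
```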
Still:
"With graphics, the first bottleneck you’re likely to run into is memory bandwidth. Given that 10 or more textures per object will be standard in this generation, it’s very easy to run into that bottleneck," Cerny said. "Quite a few phases of rendering become memory bound, and beyond shifting to lower bit-per-texel textures, there’s not a whole lot you can do. Our strategy has been simply to make sure that we were using GDDR5 for the system memory and therefore have a lot of bandwidth."
Another advantage of GDDR5 (at that bandwidth) seems to come from a different perspective:
"One thing we could have done is drop it down to 128-bit bus, which would drop the bandwidth to 88 gigabytes per second, and then have eDRAM on chip to bring the performance back up again," said Cerny. While that solution initially looked appealing to the team due to its ease of manufacturability, it was abandoned thanks to the complexity it would add for developers. "We did not want to create some kind of puzzle that the development community would have to solve in order to create their games. And so we stayed true to the philosophy of unified memory."
Not sure how much this will influence third-party devs, but it seems that PS4's approach is simpler to use from the start. This probably won't matter much to bigger studios, but it could help smaller and indie devs making console games for the first time.
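To give a concrete sense of the "puzzle" Cerny mentions: with a small on-chip pool (Xbox One ships 32 MB of ESRAM; the eDRAM option Cerny describes would have been in the same spirit), common 1080p render targets don't all fit at once, so developers have to decide what lives in the fast pool and what spills to main RAM. A toy illustration (my own numbers, hypothetical target layout):

```python
# Does a typical 1080p deferred-rendering G-buffer fit in a 32 MB on-chip pool?
POOL_MB = 32
WIDTH, HEIGHT = 1920, 1080

# Hypothetical layout: four 32-bit colour targets plus a 32-bit depth/stencil buffer.
bytes_per_pixel_per_target = [4, 4, 4, 4, 4]

total_mb = sum(WIDTH * HEIGHT * bpp for bpp in bytes_per_pixel_per_target) / (1024 ** 2)
print(f"G-buffer footprint: {total_mb:.1f} MB vs {POOL_MB} MB pool")
print("Fits" if total_mb <= POOL_MB else "Doesn't fit - something has to spill to main RAM or be tiled")
```

With one unified GDDR5 pool there is no such partitioning decision to make, which is exactly the ease-of-use argument.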
Someone, somewhere (I think it was actually here on VGChartz) had a good observation about all this - DDR4 was late, GDDR5 was too expensive, so MS decided to go with the DDR3 + enhancements approach. Sony made a gamble - and it paid off.
IMO, what is most interesting in PS4's architecture is not GDDR5, but the enhancements built into the GPU - a direct bus from GPU to CPU that bypasses the caches, reducing synchronization issues between the two, and significantly reduced overhead when running compute and graphics tasks at the same time, thanks to two additional enhancements to the GPU.
"There are many, many ways to control how the resources within the GPU are allocated between graphics and compute. Of course, what you can do, and what most launch titles will do, is allocate all of the resources to graphics. And that’s perfectly fine, that's great. It's just that the vision is that by the middle of the console lifecycle, that there's a bit more going on with compute."
Full article @ http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php