bonzobanana said:
The Xbox One needs about 125 GB/s of memory bandwidth to match the PS4's 176 GB/s, given that its GPU is roughly 40% weaker and most main-memory traffic is graphics data, so the PS4 isn't bottlenecked. Xbox One main memory is only 68 GB/s, and on top of that many Xbox One games suffer more frame drops even when running at a lower resolution, despite a slightly faster CPU setup than the PS4. So clearly the 32MB of eSRAM is not enough to make up the shortfall, despite being far, far faster than eDRAM.

The 10MB of eDRAM on the 360 was enough to give it a slight frame rate advantage over many PS3 games, as long as the resolution fit within 10MB; the PS3 supports a much wider range of 1080p and 3D games that require larger frame buffers. Also, 10MB of eDRAM against 512MB in the 360 is a higher ratio than 32MB against 8GB in the Xbox One, though admittedly the operating system may make fewer calls on time-critical memory access; I'm only guessing there.

Let's face it: the consoles without small pockets of high-speed memory, but with reasonable bandwidth for main memory and/or graphics memory, achieve a lot more. The PS4 clearly does; so does the PS3 in properly optimised games, despite a much weaker GPU than the 360; and so did the original Xbox the generation before. A small amount of high-bandwidth memory is really restricting for ambitious games and seems to have a common symptom: frame rate drops. Both the Xbox One and Wii U suffer from it horribly. The 360 doesn't, but then its main memory wasn't slow; it had almost twice the bandwidth of the Wii U's memory. So for the 360 the eDRAM was a performance bonus that improved the console's games; it wasn't a workaround for using cheaper, slower main memory.

Also, the Wii punched above its weight despite only having an ~11 GFLOPS GPU (see Xenoblade), and that design has a dedicated 1MB texture cache, a 2MB frame buffer, 24MB of 1T-SRAM and 64MB of GDDR3 buffer memory. That's four pools of memory, in addition to all the other smaller caches in its design.
Obviously this was mainly inherited from the GameCube it was based on, but still, huge bandwidth all things considered.
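For what it's worth, the ~125 GB/s figure in the quote is just the PS4's bandwidth scaled by the GPU compute ratio, and the eDRAM-to-RAM ratio point checks out too. A quick sanity check (the TFLOPS numbers are the commonly cited peak figures, my assumption, not from the post):

```python
# Back-of-envelope check of the quoted bandwidth and fast-memory-ratio claims.
# GPU TFLOPS values are the commonly cited peaks (assumed, not stated in the post).
ps4_bw = 176.0   # GB/s, PS4 GDDR5 bandwidth
ps4_gpu = 1.84   # TFLOPS, PS4 GPU (assumed)
xb1_gpu = 1.31   # TFLOPS, Xbox One GPU (assumed)

# Scale PS4's bandwidth by the GPU compute ratio.
needed_bw = ps4_bw * (xb1_gpu / ps4_gpu)
print(f"Bandwidth to stay in proportion with PS4: {needed_bw:.1f} GB/s")  # ~125.3
print(f"Shortfall vs the 68 GB/s DDR3: {needed_bw - 68:.1f} GB/s")        # ~57.3

# Fast-memory-to-main-RAM ratios (Xbox One shipped with 8 GB of DDR3):
r360 = 10 / 512    # 10 MB eDRAM vs 512 MB main RAM
rxb1 = 32 / 8192   # 32 MB eSRAM vs 8 GB main RAM
print(f"360: {r360:.2%} of RAM is fast, Xbox One: {rxb1:.2%}")  # 1.95% vs 0.39%
```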
Wii emulation doesn't at all mean that the memory bandwidth must be the same, or even similar.
As to the eDRAM not being enough, at least one dev begs to differ:
http://www.vg247.com/2012/11/05/wii-u-avoids-ram-bottleneck-says-nano-assault-dev/
http://hdwarriors.com/why-the-wii-u-is-probably-more-capable-than-you-think-it-is/
http://thewiiu.com/topic/7747-interesting-article-regarding-cpugpu-in-wii-u/
"The performance problem of hardware nowadays (Interview circa 2012) is not clock speed but ram latency. Fortunately Nintendo took great efforts to ensure developers can really work around that typical bottleneck on Wii U. They put a lot of thought on how CPU, GPU, caches and memory controllers work together to amplify your code speed."
"Nintendo made very wise choices for cache layout, RAM latency and RAM size to work against these pitfalls"
"The Wii U eDRAM has a similar function as the eDRAM in the XBOX360. You put your GPU buffers there for fast access. On Wii U it is just much more available than on XBOX360, which means you can render faster because all of your buffers can reside in this very fast RAM. On Wii U the eDRAM is available to the GPU and CPU. So you can also use it very efficiently to speed up your application."
"Theoretical RAM bandwidth in a system doesn’t tell you too much because GPU caching will hide a lot of this latency."
And again, the GDDR3 in the Wii was not just a buffer. You're thinking of the Gamecube, which used slower DRAM.