drkohler said:
HappySqurriel said:

Edit: essentially, by caching data in eDRAM rather than grabbing it from main memory, Nintendo could be eliminating 50% to 95% of the data transfer across the memory bus, making the difference between a $4 memory module and a $400 memory module meaningless.

Dude, the eDRAM is a FRAME BUFFER, not a cache. The CPU probably doesn't even see it...

And no, when three processors (CPU, GPU and DSP) fight for RAM over a single 64-bit bus, there are bus contention issues all the time.


And you have documentation for this, I assume?

With the Gamecube, the GPU had 3MB of built-in memory: 2MB was dedicated to frame buffers while the other 1MB was used as a texture cache. The Flipper supported (up to) 9:1 texture compression, which in many cases let the Gamecube keep roughly 3 million texels in that cache (or roughly 6 texels for every pixel it rendered).
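To put rough numbers on that (these are my own back-of-envelope assumptions, not official specs: 24-bit uncompressed texels, the whole 1MB cache holding compressed textures), the arithmetic looks like this:

# Back-of-envelope only -- my assumptions, not official Flipper specs.
cache_bytes = 1 * 1024 * 1024              # 1MB on-die texture cache
uncompressed_texel_bytes = 3               # assuming 24-bit colour per texel
compression_ratio = 9                      # the claimed up-to-9:1 compression

texels_in_cache = cache_bytes / (uncompressed_texel_bytes / compression_ratio)
print(f"~{texels_in_cache / 1e6:.1f} million texels fit in the cache")   # ~3.1 million

That's where the ~3 million figure comes from; the texels-per-pixel number obviously depends on what output resolution you assume.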

Why would Nintendo take an approach with the Gamecube that worked so well, roughly emulate it with the Wii U, and ignore one of the biggest advantages it gave them?

 

Every post of yours generally requires the assumption that Nintendo's engineers are incompetent morons; my assumption is that their design philosophy is different from Microsoft's and Sony's. They likely have enough eDRAM on their CPU/GPU that, when it is used appropriately, far less data has to be transferred across the bus, so they don't really need super fast main memory. That would also explain the earlier comments from developers praising the memory architecture, and I would expect that once textures are cached, memory bandwidth isn't the bottleneck on the Wii U that it is on the Xbox 360 or PS3.
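Just to show why caching changes the picture, here's a rough sketch. Every number in it is my own guess (resolution, overdraw, texture fetch count, hit rate), not a Wii U spec; the point is only that if the framebuffer lives in eDRAM and most texture fetches hit data already on-die, the traffic that actually has to cross the external 64-bit bus collapses.

# Illustrative only -- all numbers below are my own assumptions, not Wii U specs.
BYTES_PER_PIXEL = 4
WIDTH, HEIGHT, FPS = 1280, 720, 60
OVERDRAW = 3            # assumed average overdraw
TEXTURE_READS = 2       # assumed texture fetches per shaded pixel

pixels_per_sec = WIDTH * HEIGHT * FPS

# Naive case: every framebuffer blend and every texture fetch hits main RAM.
fb_traffic  = pixels_per_sec * OVERDRAW * BYTES_PER_PIXEL * 2   # read-modify-write
tex_traffic = pixels_per_sec * OVERDRAW * TEXTURE_READS * BYTES_PER_PIXEL
all_in_main_ram = fb_traffic + tex_traffic

# With eDRAM: framebuffer stays entirely on-die, and (say) 90% of texture
# fetches hit data already cached on-die, so only the misses cross the bus.
TEXTURE_HIT_RATE = 0.9
with_edram = tex_traffic * (1 - TEXTURE_HIT_RATE)

print(f"all in main RAM : {all_in_main_ram / 1e9:.1f} GB/s")
print(f"with eDRAM      : {with_edram / 1e9:.1f} GB/s")
print(f"reduction       : {1 - with_edram / all_in_main_ram:.0%}")

With those guesses the reduction lands right at the top of the 50% to 95% range from the post drkohler quoted; tweak the hit rate and overdraw and the saving moves around, but the basic point stands either way.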