megafenix said:

Come on dude, even today's PC standards like DDR3 at 50GB/s or 70GB/s fall short, considering we are talking about eDRAM embedded directly in the GPU die, not main RAM like the DDR3.

 

Also, Shin'en commented that with just 7 megabytes of the Wii U's eDRAM you can do 720p with double buffering, while on the Xbox you need the full 10 megabytes for that, and even Microsoft admits it. Doesn't this kind of suggest that you get the same bandwidth from 7 megabytes of eDRAM that you would get from the Xbox's 10 megabytes? Remember that the Xbox's quoted 256GB/s of ROP bandwidth is ultimately limited by a 32GB/s channel, and the Wii U GPU doesn't have this bottleneck, so it sounds about right.


I'll chime in here.
If you run quad-channel DDR3 at 3,000 MT/s you can get a theoretical bandwidth of 96GB/s on the PC.
DDR4 is also coming.
Bandwidth = effective transfer rate x bus width in bits / 8, per channel, then multiplied by the number of channels.
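
To make that formula concrete, here's a minimal sketch in Python (the helper name is my own invention, not a standard API); it assumes 64-bit channels and ignores real-world efficiency losses:

```python
# Peak theoretical DDR bandwidth: transfers per second x bytes per transfer x channels.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64, channels=1):
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

# Quad-channel DDR3 at 3,000 MT/s with 64-bit channels:
print(peak_bandwidth_gbs(3000, channels=4))  # 96.0 GB/s
```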

However, from my own tests (as I have a quad-channel DDR3 set-up), memory bandwidth isn't actually that important: games see a negligible difference whether they have 12GB/s of memory bandwidth or 90GB/s.
Remember that the PC in general doesn't share its system memory bandwidth with a graphics processor, so there are far smaller memory bandwidth demands placed on it.
The general consensus on the PC is that more RAM is preferred over faster RAM, because if you run out of RAM you take a much larger performance penalty. (IGPs notwithstanding.)
Because of the PC's constantly evolving nature, graphics processors long ago outstripped the bandwidth the rest of the system could provide, not just in memory bandwidth but in interconnect bandwidth too, hence why GPUs come with dedicated high-speed GDDR5 memory, which will eventually be supplanted by GDDR6. Consoles can get away with a shared memory/eDRAM set-up because, well, they aren't trying to push 20 teraflops of single-precision floating point while running at resolutions like 1440p, 1600p, 2K and 4K, with 8-16x anti-aliasing and all the other bells and whistles.
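
For a rough sense of why those resolutions and AA settings push GPUs onto dedicated memory, here's a back-of-the-envelope sketch; every number in it (sample count, overdraw, frame rate) is an assumption picked purely for illustration:

```python
# Rough bandwidth needed just to write a 32-bit colour buffer at 4K with MSAA.
width, height = 3840, 2160
bytes_per_pixel = 4      # RGBA8 colour
samples = 4              # 4x MSAA multiplies colour/depth traffic
fps = 60
overdraw = 2.0           # assume each pixel gets touched roughly twice per frame

gb_per_s = width * height * bytes_per_pixel * samples * fps * overdraw / 1e9
print(f"{gb_per_s:.1f} GB/s")  # ~15.9 GB/s for colour writes alone
```

And that's before depth/stencil traffic, texture fetches and geometry, which push the real figure far higher.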

Then we fast-forward to APUs and integrated graphics.
Intel went with an eDRAM approach for Iris Pro, and they included 128MB of the stuff; regardless of its bandwidth, performance is still held back because the rest of the GPU can't make use of it all. In Intel's case, though, the CPU treats it as an L4 cache, so it's not entirely going to waste.
Even if you have 1,024GB/s of memory bandwidth, if your GPU is sub-par that bandwidth is going to be wasted.

As for the 4K-8K textures, it's not entirely impossible. The Wii U does have a relatively modern GPU architecture, complete with a modern implementation of texture compression (3Dc), so it wouldn't be surprising to see 8:1 or even 16:1 levels of texture compression, which saves a massive amount of memory and bandwidth.
The downside to using lots of compression is that you create a lot of artifacts in the textures, but that's really not an issue for a game such as this, because you aren't going to get close and personal enough with the textures to see them. (Not to mention how fast your movement is.)
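
Purely to illustrate the scale of those savings, here's some rough arithmetic; the texture sizes, texel format and compression ratios are assumptions, not figures from any actual game:

```python
# Approximate memory footprint of a single texture at a given compression ratio.
def texture_size_mb(width, height, bytes_per_texel=4, compression_ratio=1, mipmaps=True):
    size = width * height * bytes_per_texel / compression_ratio
    if mipmaps:
        size *= 4 / 3    # a full mip chain adds roughly one third
    return size / (1024 ** 2)

print(texture_size_mb(4096, 4096))                        # ~85.3 MB uncompressed
print(texture_size_mb(4096, 4096, compression_ratio=8))   # ~10.7 MB at 8:1
print(texture_size_mb(8192, 8192, compression_ratio=16))  # ~21.3 MB at 16:1
```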

Basically it's a trade-off, but it should result in a net improvement in image quality overall; it's just not something all games can or will use.



--::{PC Gaming Master Race}::--