
Wii U's eDRAM stronger than given credit?

megafenix said:

slower?

Even if it doesn't match the bandwidth of the Wii U's eDRAM, it still wins on latency, which is also important, and we already saw that in the GameCube vs Xbox era, so at the end of the day both memories might give the same results. Of course, that wasn't the only reason they changed the approach here, but you have to read:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"

"This controversy is rather surprising to me, especially when you view as ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it," explains Andrew Goossen.

"We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM. Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM.

"The Xbox 360 was the easiest console platform to develop for, it wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said, 'gosh, it would sure be nice if an entire render target didn't have to live in eDRAM' and so we fixed that on Xbox One where we have the ability to overflow from ESRAM into DDR3, so the ESRAM is fully integrated into our page tables and so you can kind of mix and match the ESRAM and the DDR memory as you go... From my perspective it's very much an evolution and improvement - a big improvement - over the design we had with the Xbox 360. I'm kind of surprised by all this, quite frankly."

"

Latency means jack shit in rendering but thanks for dodging my question ...

The rest is irrelevant information to me. 




Please. Stay away from discussing the Wii U hardware. All it does is create an endless cycle of arguing and animosity.



fatslob-:O said:
megafenix said:

slower?

Even if it doesn't match the bandwidth of the Wii U's eDRAM, it still wins on latency, which is also important, and we already saw that in the GameCube vs Xbox era, so at the end of the day both memories might give the same results. Of course, that wasn't the only reason they changed the approach here, but you have to read:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"

"This controversy is rather surprising to me, especially when you view as ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it," explains Andrew Goossen.

"We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM. Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM.

"The Xbox 360 was the easiest console platform to develop for, it wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said, 'gosh, it would sure be nice if an entire render target didn't have to live in eDRAM' and so we fixed that on Xbox One where we have the ability to overflow from ESRAM into DDR3, so the ESRAM is fully integrated into our page tables and so you can kind of mix and match the ESRAM and the DDR memory as you go... From my perspective it's very much an evolution and improvement - a big improvement - over the design we had with the Xbox 360. I'm kind of surprised by all this, quite frankly."

"

Latency means jack shit in rendering but thanks for dodging my question ...

The rest is irrelevant information to me. 

 

That's not dodging; I am merely explaining why Microsoft went with ESRAM instead of eDRAM even though it would be more expensive, and thus they couldn't afford more bandwidth at that price. But the main purpose, combining the ESRAM and DDR3 logic, was what was important to them.

Read, dude, read.

Latency ain't important?

Dude, seriously, you need to read:

http://www.notenoughshaders.com/2012/11/03/shinen-mega-interview-harnessing-the-wii-u-power/

"

When testing our first code on Wii U we were amazed how much we could throw at it without any slowdowns, at that time we even had zero optimizations. The performance problem of hardware nowadays is not clock speed but ram latency. Fortunately Nintendo took great efforts to ensure developers can really work around that typical bottleneck on Wii U. They put a lot of thought on how CPU, GPU, caches and memory controllers work together to amplify your code speed. For instance, with only some tiny changes we were able to optimize certain heavy load parts of the rendering pipeline to 6x of the original speed, and that was even without using any of the extra cores.

"



megafenix said:
fatslob-:O said:
megafenix said:
fatslob-:O said:
Darc Requiem said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too...
And yes, 550MHz*1024bit is about 563 gigaBITS/s, NOT gigaBYTES/s...


I thought something was fishy. According to his calculations, wouldn't the bandwidth of the Wii U's eDRAM be about 70GB/s?

It is, but when you have liars like megafenix spreading misinformation to the ill-informed, this happens...

 

Liars like yourself, right?

Want an official source about the 256GB/s besides the ones I mentioned?

How about someone from Microsoft itself?

http://www.cis.upenn.edu/~milom/cis501-Fall08/papers/xbox-system.pdf

When we have trolls like you, this place stinks.

Like I said, "interconnect bandwidth means jack shit". What part do you not understand?

 

Bandwidth is more important than you think; if not, why would AMD and Nvidia increase the internal bandwidth of the GPU every generation?

It's important for many things, like vertex texture fetches, and I don't say this myself; it's experts on the topic, like people from Nvidia.

But if you at least read the whole article from Gaming Blend, you will see that he doesn't say this himself, but that he consulted people for it.

Want their comments?

Here:

http://developer.amd.com/wordpress/media/2012/10/Tatarchuk-Tessellation(EG2007).pdf

 

 

Because with every generation of graphics cards both Nvidia and AMD increase the processing capabilities, and more processors require more bandwidth to keep them fed efficiently.

Right now you are saying that the Wii U has over 500GB/s of bandwidth. The recently released Titan Black (Nvidia's flagship graphics card) has a total bandwidth of 336GB/s. Are you proposing that the Wii U is now somehow more capable than the Titan Black? If so, how would you explain the Titan's raw graphical capabilities, which completely crush the Wii U's, despite the Titan's lower bandwidth?
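The standard way to reason about this trade-off is a roofline-style estimate: attainable throughput is the smaller of peak compute and memory bandwidth multiplied by the arithmetic intensity of the workload. A minimal sketch, where the peak-compute and intensity numbers are made up purely for illustration (only the 336GB/s figure comes from the post above):

# Roofline-style estimate: attainable GFLOP/s = min(peak compute, bandwidth * arithmetic intensity).
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Hypothetical numbers for illustration only: a big compute-heavy card vs. a small GPU
# with a very fast local memory pool. The 336 GB/s is the Titan Black figure quoted above.
big_card   = attainable_gflops(peak_gflops=5000.0, bandwidth_gbs=336.0, flops_per_byte=4.0)
small_card = attainable_gflops(peak_gflops=350.0,  bandwidth_gbs=563.0, flops_per_byte=4.0)
print(big_card, small_card)   # 1344.0 vs 350.0 - the small card is capped by compute, not bandwidth

Extra bandwidth only helps until the shader cores become the bottleneck, which is exactly why a huge internal bandwidth number would not make a small GPU outrun a Titan.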



jake_the_fake1 said:

 

Because with every generation of graphics cards both Nvidia and AMD increase the processing capabilities, and more processors require more bandwidth to keep them fed efficiently.

Right now you are saying that the Wii U has over 500GB/s of bandwidth. The recently released Titan Black (Nvidia's flagship graphics card) has a total bandwidth of 336GB/s. Are you proposing that the Wii U is now somehow more capable than the Titan Black? If so, how would you explain the Titan's raw graphical capabilities, which completely crush the Wii U's, despite the Titan's lower bandwidth?

He won't be doing that because he's just a fraud computer engineer ... *HAHAHA*



jake_the_fake1 said:
megafenix said:
fatslob-:O said:
megafenix said:
fatslob-:O said:
Darc Requiem said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too...
And yes, 550MHz*1024bit is about 563 gigaBITS/s, NOT gigaBYTES/s...


I thought something was fishy. According to his calculations, wouldn't the bandwidth of the Wii U's eDRAM be about 70GB/s?

It is, but when you have liars like megafenix spreading misinformation to the ill-informed, this happens...

 

Liars like yourself, right?

Want an official source about the 256GB/s besides the ones I mentioned?

How about someone from Microsoft itself?

http://www.cis.upenn.edu/~milom/cis501-Fall08/papers/xbox-system.pdf

When we have trolls like you, this place stinks.

Like I said, "interconnect bandwidth means jack shit". What part do you not understand?

 

Bandwidth is more important than you think; if not, why would AMD and Nvidia increase the internal bandwidth of the GPU every generation?

It's important for many things, like vertex texture fetches, and I don't say this myself; it's experts on the topic, like people from Nvidia.

But if you at least read the whole article from Gaming Blend, you will see that he doesn't say this himself, but that he consulted people for it.

Want their comments?

Here:

http://developer.amd.com/wordpress/media/2012/10/Tatarchuk-Tessellation(EG2007).pdf

 

 

Because with every generation of graphics cards both Nvidia and AMD increase the processing capabilities, and more processors require more bandwidth to keep them fed efficiently.

Right now you are saying that the Wii U has over 500GB/s of bandwidth. The recently released Titan Black (Nvidia's flagship graphics card) has a total bandwidth of 336GB/s. Are you proposing that the Wii U is now somehow more capable than the Titan Black? If so, how would you explain the Titan's raw graphical capabilities, which completely crush the Wii U's, despite the Titan's lower bandwidth?

 

 

Remember that the eDRAM on Wii U works as a big cache, not as a big RAM.

Do you even know how much bandwidth the internal parts of even a low-end GPU can handle?

Here:

http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-case-study-fast-fourier-transform-part-ii/

"

OpenCL™ Optimization Case Study Fast Fourier Transform – Part II

 

Why to use Local Memory?

Local memory or Local Data Share (LDS) is a high-bandwidth memory used for data-sharing among work-items within a work-group. ATI Radeon™ HD 5000 series GPUs have 32 KB of local memory on each compute unit. Figure 1 shows the OpenCL™ memory hierarchy for GPUs [1].

 

Figure 1: Memory hierarchy of AMD GPUs

Local memory offers a bandwidth of more than 2 TB/s which is approximately 14x higher than the global memory [2]. Another advantage of LDS is that local memory does not require coalescing; once the data is loaded into local memory, it can be accessed in any pattern without performance degradation. However, LDS only allows sharing data within a work-group and not across the borders (among different work-groups). Furthermore, in order to fully utilize the immense potential of LDS we have to have a flexible control over the data access pattern to avoid bank conflicts. In our case, we used LDS to reduce accesses to global memory by storing the output of 8-point FFT in local memory and then performing next three stages without returning to global memory. Hence, we now return to global memory after 6 stages instead of 3 in the previous case. In the next section we elaborate on the use of local memory and the required data access pattern.

"

 

And that's only one Local Data Share; each SIMD core has its own Local Data Share, so how much bandwidth would you get for a GPU of merely 4 to 5 SIMD cores?

And we still haven't accounted for the texture units' L1 cache bandwidth or the Global Data Share bandwidth.
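To be clear about what that kind of number means: aggregate on-chip bandwidth is just per-unit bandwidth multiplied by the number of units, and it is not the same thing as the bandwidth of a single external bus. A minimal sketch with placeholder per-SIMD figures (these are not measured Wii U/Latte numbers):

# Aggregate on-chip bandwidth = per-unit bandwidth * number of units.
# The per-unit figure below is a placeholder, not a measured Wii U/Latte spec.
def aggregate_gbs(per_unit_gbs, units):
    return per_unit_gbs * units

lds_total = aggregate_gbs(per_unit_gbs=60.0, units=5)   # e.g. 5 SIMDs with 60 GB/s of LDS each
print(f"aggregate LDS bandwidth: {lds_total} GB/s")     # 300.0 GB/s, summed across units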



megafenix said:

That's not dodging; I am merely explaining why Microsoft went with ESRAM instead of eDRAM even though it would be more expensive, and thus they couldn't afford more bandwidth at that price. But the main purpose, combining the ESRAM and DDR3 logic, was what was important to them.

You didn't explain shit! You just dodged the question once again... Why is Microsoft's implementation of its memory subsystem lower in bandwidth than the Wii U's?

Read, dude, read.

Latency ain't important?

Dude, seriously, you need to read:

http://www.notenoughshaders.com/2012/11/03/shinen-mega-interview-harnessing-the-wii-u-power/

"

When testing our first code on Wii U we were amazed how much we could throw at it without any slowdowns, at that time we even had zero optimizations. The performance problem of hardware nowadays is not clock speed but ram latency. Fortunately Nintendo took great efforts to ensure developers can really work around that typical bottleneck on Wii U. They put a lot of thought on how CPU, GPU, caches and memory controllers work together to amplify your code speed. For instance, with only some tiny changes we were able to optimize certain heavy load parts of the rendering pipeline to 6x of the original speed, and that was even without using any of the extra cores.

"

Then how come the R9 290X and the Titan Black shit on the Wii U despite it having more bandwidth and lower latency?



drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too...
And yes, 550MHz*1024bit is about 563 gigaBITS/s, NOT gigaBYTES/s...

So actually 70GB/s then.
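For reference, the arithmetic behind that 70GB/s figure, using the 550MHz clock and 1024-bit width quoted above (this is just the gigabit-to-gigabyte conversion):

clock_mhz = 550          # eDRAM clock quoted above
width_bits = 1024        # interface width quoted above

gbit_per_s = clock_mhz * width_bits / 1000    # 563.2 Gbit/s
gbyte_per_s = gbit_per_s / 8                  # 70.4 GB/s
print(gbit_per_s, gbyte_per_s)                # 563.2 70.4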



 

binary solo said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too...
And yes, 550MHz*1024bit is about 563 gigaBITS/s, NOT gigaBYTES/s...

So actually 70GB/s then.

That was obvious; otherwise the Wii U's Latte die would be a fair bit larger.



fatslob-:O said:
jake_the_fake1 said:

 

Because with every generation of graphics cards both Nvidia and AMD increase the processing capabilities, and more processors require more bandwidth to keep them fed efficiently.

Right now you are saying that the Wii U has over 500GB/s of bandwidth. The recently released Titan Black (Nvidia's flagship graphics card) has a total bandwidth of 336GB/s. Are you proposing that the Wii U is now somehow more capable than the Titan Black? If so, how would you explain the Titan's raw graphical capabilities, which completely crush the Wii U's, despite the Titan's lower bandwidth?

He won't be doing that because he's just a fraud computer engineer ... *HAHAHA*


Pff, ahaha, I already answered your question and his with ease, no trouble here, and the source is legitimate.

You suck, dude; is that the best you can do?

How about you answer my questions for a change?

Tell me, how much performance do you get using tessellation and displacement?

Or tell me which is the better technique approach:

Catmull-Clark or bicubic Bézier?

Seriously, it's not my fault that Renesas and Shin'en say the Wii U has plenty of bandwidth, and it ain't my fault that the formula gives 563.2GB/s:

 

8 macros * 550MHz * 1024 bits / (8 bits per byte * 1000) = 563.2GB/s

 

Ain't my fault that this also works for the 360's eDRAM:

 

4 macros * 500MHz * 1024 bits / (8 bits per byte * 1000) = 256GB/s

 

That's what there is. We already knew that the Wii U had plenty of bandwidth; that wasn't a mystery. The mystery was the number, and since Sony was aiming for 1 terabyte of bandwidth with eDRAM, I don't see why 563.2GB/s is so difficult to understand.
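For clarity, here are the two formulas above written out in one place; note that the macro counts and the decision to sum the per-macro figures into a single aggregate number are the claim being disputed in this thread, not an established spec:

# Claimed aggregate bandwidth = macros * clock (MHz) * width (bits), converted to GB/s.
# MHz * bits = Mbit/s; divide by 8 for MB/s, then by 1000 for GB/s, summed over macros.
def claimed_aggregate_gbs(macros, clock_mhz, width_bits):
    return macros * clock_mhz * width_bits / (8 * 1000)

print(claimed_aggregate_gbs(8, 550, 1024))   # 563.2 - the Wii U eDRAM claim above
print(claimed_aggregate_gbs(4, 500, 1024))   # 256.0 - the Xbox 360 eDRAM figure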