Locked: Wii U's eDRAM stronger than given credit?


Actually no, the calculation is the same as the one behind the 360's 256GB/s; the only difference is you've got 8 macros instead of 4 and 50 more MHz.

 

so

4 macros * 500 MHz * 1024 bits / (8 bits * 1000) = 256 GB/s

 

as we can see in the Wii U photo from Chipworks, we have 8 macros, so:

8 macros * 550 MHz * 1024 bits / (8 bits * 1000) = 563.2 GB/s
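Spelled out in code, the arithmetic both sides are using looks like this (a sketch; the 1024-bit-per-macro bus width is exactly the assumption being disputed in this thread):

```python
def edram_bandwidth_gbps(macros, clock_mhz, bus_bits=1024):
    """Peak bandwidth in GB/s: macros * clock (MHz) * bus width (bits),
    converted from Mbit/s to GB/s (divide by 8 bits/byte and by 1000)."""
    return macros * clock_mhz * bus_bits / (8 * 1000)

print(edram_bandwidth_gbps(4, 500))   # 256.0 -- the Xbox 360 figure
print(edram_bandwidth_gbps(8, 550))   # 563.2 -- the contested Wii U figure
```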

 

74GB/s isn't enough for quick ports to work. We're talking about 74GB/s for the whole eDRAM, and since quick ports like Call of Duty: Ghosts don't use the full eDRAM but just the 10MB portion carried over from the 360 code, you'd only get about a third of that, and the game wouldn't even run.
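The "third of that" step can be made concrete under the simplifying (and debatable) assumption that usable bandwidth scales with the fraction of the eDRAM pool a straight port actually touches:

```python
total_edram_mb = 32       # Wii U's eDRAM pool
port_footprint_mb = 10    # the 360-sized slice an unmodified port uses
full_pool_gbps = 74.0     # the low-end figure being argued about

# Simplifying assumption: bandwidth proportional to footprint used.
usable_gbps = full_pool_gbps * port_footprint_mb / total_edram_mb
print(round(usable_gbps, 1))  # 23.1 -- roughly a third of 74 GB/s
```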



megafenix said:

Actually no, the calculation is the same as the one behind the 360's 256GB/s; the only difference is you've got 8 macros instead of 4 and 50 more MHz.

so

4 macros * 500 MHz * 1024 bits / (8 bits * 1000) = 256 GB/s

as we can see in the Wii U photo from Chipworks, we have 8 macros, so:

8 macros * 550 MHz * 1024 bits / (8 bits * 1000) = 563.2 GB/s

74GB/s isn't enough for quick ports to work. We're talking about 74GB/s for the whole eDRAM, and since quick ports like Call of Duty: Ghosts don't use the full eDRAM but just the 10MB portion carried over from the 360 code, you'd only get about a third of that, and the game wouldn't even run.

This makes no sense, but it's not a surprise given that you didn't know how tessellation worked ...



fatslob-:O said:
Darc Requiem said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too..
And yes, 550MHz*1024bit is 563.2 gigaBITS/s, NOT gigaBYTES/s....


I thought something was fishy. According to his calculations, wouldn't the bandwidth of the Wii U's eDRAM be 74GB/s?

It is, but when you have liars like megafenix spreading misinformation to the ill-informed, this happens ...
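drkohler's bits-versus-bytes correction is easy to check (a quick sketch; assumes a single 1024-bit interface at 550 MHz):

```python
clock_hz = 550e6   # 550 MHz
bus_bits = 1024

gbit_per_s = clock_hz * bus_bits / 1e9   # gigaBITS per second
gbyte_per_s = gbit_per_s / 8             # 8 bits per byte

print(gbit_per_s)    # 563.2 -- Gbit/s, not GB/s
print(gbyte_per_s)   # 70.4 -- GB/s, in the ballpark of the ~74GB/s above
```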

 

Liars like yourself, right?

Want an official source about the 256GB/s besides the ones I mentioned?

How about someone from Microsoft itself?

http://www.cis.upenn.edu/~milom/cis501-Fall08/papers/xbox-system.pdf

 

when we have trolls like you, this place stinks



Are people STILL seriously going on about the WiiU's power?

It's ridiculous. The Wii U can pump out some nice-looking graphics for the hardware it has, just like the Wii and the GameCube before it, but it pales in comparison to what both the Xbox One and PS4 can do, and this is a simple fact.

So what if its eDRAM has this supposed 500GB/s bandwidth? Hell, it could have 1TB/s or even 2TB/s of bandwidth, and you know what? It won't change the simple fact that the GPU is weak as shit. So while it won't be bottlenecked by bandwidth, the bottleneck would be the GPU and CPU.

So, going with Nintendo's usual efficient and cost-effective designs, what's more reasonable: that Nintendo put in an unbalanced, ultra-high-bandwidth eDRAM the system could never actually use, or that they put in eDRAM that is suitable and balanced for the system's overall spec?

I would say that Nintendo's system engineers were sensible enough to put in eDRAM with the rumored ~60-80GB/s, as this would be cost-effective but also more than enough for a system with a low-specced GPU and CPU.

This does NOT mean the Wii U isn't capable of great graphics; it just means that developers have to push art style in their graphics rather than realism. This is something Nintendo games are all about: great art styles, which is why Super Mario 3D World, Super Smash Bros., and Mario Kart look great.

I just think it's time these Nintendo hardware warriors STOP, open their eyes, and realize the Wii U is not in any fashion a powerful machine, accept this fact, and then go forth and enjoy the still-gorgeous games this box can actually produce, rather than going on these ridiculous tangents of insanity... just saying.



fatslob-:O said:
megafenix said:

Actually no, the calculation is the same as the one behind the 360's 256GB/s; the only difference is you've got 8 macros instead of 4 and 50 more MHz.

so

4 macros * 500 MHz * 1024 bits / (8 bits * 1000) = 256 GB/s

as we can see in the Wii U photo from Chipworks, we have 8 macros, so:

8 macros * 550 MHz * 1024 bits / (8 bits * 1000) = 563.2 GB/s

74GB/s isn't enough for quick ports to work. We're talking about 74GB/s for the whole eDRAM, and since quick ports like Call of Duty: Ghosts don't use the full eDRAM but just the 10MB portion carried over from the 360 code, you'd only get about a third of that, and the game wouldn't even run.

This makes no sense, but it's not a surprise given that you didn't know how tessellation worked ...


It makes more sense than your trolling. Think:

What is a port?

It's something that comes from one system to another.

Where does it come from?

The 360.

How much eDRAM does the 360 have?

10MB.

So when porting directly to Wii U, will the rest (the other 22 MB) magically be used for the framebuffer and other stuff?

No, you have to rework the code.

Did they rework it for the Wii U version of Ghosts?

Most likely a few parts, but not the eDRAM, because even though the Wii U has enough for 720p, the game was sub-HD just like the Xbox 360 version.

That's all the logic you need; the answer is in front of you.

 

 

I don't even have to do math to see 74GB/s isn't enough for a port from the 360 to work.

 

Come on: 360 eDRAM + ROPs = 256GB/s

GPU to eDRAM = 32GB/s

Even accounting for that bottleneck, 74GB/s doesn't work: 74 is only 2.3 times faster than the 32GB/s link but 3.46 times slower than the 256GB/s figure. And since we're talking about ports where only a fraction of the eDRAM is used (Shin'en already explained that you only need about 7MB for a 720p framebuffer, so those 7MB should obviously give bandwidth near the Xbox 360's), it works even less. Hell, just look at Bayonetta 2, which has better graphics than the Xbox 360 and PS3 Bayonetta 1 plus a steady 60fps and lots of materials, illumination, and other effects; that proves the point.
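For what it's worth, the ratios quoted above do check out arithmetically, whatever one makes of the conclusion:

```python
wii_u_low = 74.0          # GB/s, the disputed low estimate
x360_edram_rops = 256.0   # GB/s, eDRAM <-> ROPs on the 360
x360_gpu_edram = 32.0     # GB/s, GPU -> eDRAM link on the 360

print(round(wii_u_low / x360_gpu_edram, 2))    # 2.31x the GPU link
print(round(x360_edram_rops / wii_u_low, 2))   # 3.46x below eDRAM+ROPs
```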



megafenix said:
fatslob-:O said:
Darc Requiem said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too..
And yes, 550MHz*1024bit is 563.2 gigaBITS/s, NOT gigaBYTES/s....


I thought something was fishy. According to his calculations, wouldn't the bandwidth of the Wii U's eDRAM be 74GB/s?

It is, but when you have liars like megafenix spreading misinformation to the ill-informed, this happens ...

 

Liars like yourself, right?

Want an official source about the 256GB/s besides the ones I mentioned?

How about someone from Microsoft itself?

http://www.cis.upenn.edu/~milom/cis501-Fall08/papers/xbox-system.pdf

 

when we have trolls like you, this place stinks

Like I said, "interconnect bandwidth means jack shit". What part do you not understand ? 



megafenix said:


It makes more sense than your trolling. Think:

What is a port?

It's something that comes from one system to another.

Where does it come from?

The 360.

How much eDRAM does the 360 have?

10MB.

So when porting directly to Wii U, will the rest (the other 22 MB) magically be used for the framebuffer and other stuff?

No, you have to rework the code.

Did they rework it for the Wii U version of Ghosts?

Most likely a few parts, but not the eDRAM, because even though the Wii U has enough for 720p, the game was sub-HD just like the Xbox 360 version.

That's all the logic you need; the answer is in front of you.

Oh right, your bullshit about tessellation tells me that you're probably just spouting more lies. LOL

Like the boy who cried wolf ...



fatslob-:O said:
megafenix said:
fatslob-:O said:
Darc Requiem said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the Wii U camp, too..
And yes, 550MHz*1024bit is 563.2 gigaBITS/s, NOT gigaBYTES/s....


I thought something was fishy. According to his calculations, wouldn't the bandwidth of the Wii U's eDRAM be 74GB/s?

It is, but when you have liars like megafenix spreading misinformation to the ill-informed, this happens ...

 

Liars like yourself, right?

Want an official source about the 256GB/s besides the ones I mentioned?

How about someone from Microsoft itself?

http://www.cis.upenn.edu/~milom/cis501-Fall08/papers/xbox-system.pdf

 

when we have trolls like you, this place stinks

Like I said, "interconnect bandwidth means jack shit". What part do you not understand ? 

 

Bandwidth is more important than you think; if not, why would AMD and Nvidia increase the internal bandwidth of the GPU every generation?

It's important for many things, like vertex texture fetches, and I don't say this myself; experts on the topic do, like people from Nvidia.

 

But if you at least read the whole article from Gaming Blend, you'll see that he doesn't say this himself; he consulted people for it.

 

Want their comments?

Here:

http://developer.amd.com/wordpress/media/2012/10/Tatarchuk-Tessellation(EG2007).pdf

"

Meshes are Expensive Beasts!

• Want to express more and more details

  Complex, rich worlds of recent games require detailed characters

  Shading has evolved tremendously in recent years

• Meshes are a pricey representation

  Need to store a lot of data (positions, uvs, animation)

• Vertex transform cost is incurred regardless of how close the object is to the viewer

  Expensive for complex shaders or animated / detailed objects

• Novel GPUs' unified shader architecture has much more efficient geometric processing

  But memory storage and fetch bandwidth is still a big concern

  Especially for animation

  Large meshes cause less vertex cache reuse

"

 

 

 

"

We would like to change that now!

Both must manage details for stable performance

• Bring tessellation techniques from film into real-time rendering scenarios

• Takes advantage of highly efficient tessellation HW and memory bandwidth of ATI Radeon HD 2000 Series

• Fast displacement mapping and animation

"

 

Or do you prefer Nvidia?
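The slide's "meshes are a pricey representation" point can be illustrated with a back-of-the-envelope footprint calculation (the vertex layout and counts here are hypothetical, just to show the scale involved):

```python
def mesh_vertex_bytes(vertices, floats_per_vertex=8):
    """Rough per-pass vertex fetch cost: position (3) + normal (3) + uv (2)
    floats at 4 bytes each; skinning/animation data would add more."""
    return vertices * floats_per_vertex * 4

per_pass = mesh_vertex_bytes(1_000_000)   # a hypothetical 1M-vertex mesh
per_second = per_pass * 60 / 1e9          # fetched every frame at 60fps

print(per_pass)     # 32000000 bytes (~32 MB) per pass
print(per_second)   # 1.92 -- GB/s of fetch bandwidth for this one mesh
```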



megafenix said:

Bandwidth is more important than you think; if not, why would AMD and Nvidia increase the internal bandwidth of the GPU every generation?

It's important for many things, like vertex texture fetches, and I don't say this myself; experts on the topic do, like people from Nvidia.

Whatever you do, just keep the Neverland lies going. LOLOL

Except it was always easy to get higher internal bandwidth. However, embedded memory bandwidth is a different story, seeing as it does NOT serve as a pathway for a sea of data; it serves as temporary secondary storage. If you were so right about its embedded memory bandwidth, then why is the eSRAM in the Xbox One slower than the Wii U's eDRAM, despite the eSRAM having far more transistors for the same memory density? Care to explain that?
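The transistor point can be quantified with textbook cell figures (a sketch: 6 transistors per SRAM bit versus 1 transistor plus a capacitor per eDRAM bit; real arrays add redundancy and periphery logic, so treat these as rough lower bounds):

```python
def cell_transistors(megabytes, transistors_per_bit):
    """Cell transistors only, ignoring sense amps and decoders."""
    bits = megabytes * 8 * 1024 * 1024
    return bits * transistors_per_bit

esram = cell_transistors(32, 6)   # classic 6T SRAM cell
edram = cell_transistors(32, 1)   # 1T1C eDRAM cell (plus a capacitor)
print(esram // edram)             # 6 -- 6x in cell transistors alone
```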



fatslob-:O said:
megafenix said:

Bandwidth is more important than you think; if not, why would AMD and Nvidia increase the internal bandwidth of the GPU every generation?

It's important for many things, like vertex texture fetches, and I don't say this myself; experts on the topic do, like people from Nvidia.

Whatever you do, just keep the Neverland lies going. LOLOL

Except it was always easy to get higher internal bandwidth. However, embedded memory bandwidth is a different story, seeing as it does NOT serve as a pathway for a sea of data; it serves as temporary secondary storage. If you were so right about its embedded memory bandwidth, then why is the eSRAM in the Xbox One slower than the Wii U's eDRAM, despite the eSRAM having far more transistors for the same memory density? Care to explain that?

Slower?

Even if it doesn't match the bandwidth of the Wii U's eDRAM, it still wins on latency, which is also important; we already saw that in the GameCube vs Xbox era. So at the end of the day both memories might give the same results. Of course, that wasn't the only reason they changed the approach.

 

Here, but you have to read it:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"

"This controversy is rather surprising to me, especially when you view as ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it," explains Andrew Goossen.

"We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM. Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM.

"The Xbox 360 was the easiest console platform to develop for, it wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said, 'gosh, it would sure be nice if an entire render target didn't have to live in eDRAM' and so we fixed that on Xbox One where we have the ability to overflow from ESRAM into DDR3, so the ESRAM is fully integrated into our page tables and so you can kind of mix and match the ESRAM and the DDR memory as you go... From my perspective it's very much an evolution and improvement - a big improvement - over the design we had with the Xbox 360. I'm kind of surprised by all this, quite frankly."

"

 

If Microsoft had gone with eDRAM again they might have had more bandwidth, but they would also have had more latency and wouldn't have been able to apply that trick between the embedded memory and the DDR3 RAM. Plus, eSRAM is more expensive than eDRAM; you need about 4x more transistors, so of course you can't get the same-capacity chip compared to eDRAM for the same money, but you will still get lower latency regardless.
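The "overflow from ESRAM into DDR3" flexibility Goossen describes can be sketched as a toy placement policy (names and sizes are hypothetical; real engines decide this per render target with far more nuance):

```python
def place_render_targets(targets_mb, esram_budget_mb=32):
    """Greedy toy policy: fill the fast embedded pool first, then overflow
    the remaining targets to main DDR3 -- the flexibility the 360's eDRAM
    lacked, per the interview above."""
    placement, used = {}, 0
    for name, size in targets_mb:
        if used + size <= esram_budget_mb:
            placement[name] = "esram"
            used += size
        else:
            placement[name] = "ddr3"
    return placement

print(place_render_targets([("color", 16), ("depth", 8), ("shadow", 16)]))
# {'color': 'esram', 'depth': 'esram', 'shadow': 'ddr3'}
```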