
Forums - Nintendo Discussion - Wii U's eDRAM stronger than given credit?

tanok said:

Again, pay attention and read:

http://www.eurogamer.net/articles/digitalfoundry-ps4-ac4-patch-analysed-in-depth

I'm not the one saying this; it's Digital Foundry and others.

The game was 900p and 30fps, and was later patched to 1080p but still 30fps; only the PC version is 60fps.

And again, read.

If third parties were giving the Wii U special treatment, wouldn't ZombiU look better?

And it's a ground-up game, so what the...


Indeed. Read. That article states that the PS4 doesn't struggle at all to run the game in 1080p. The fact that it ran at 900p at first is irrelevant. The game runs at 1080p now, locked at 30fps, despite the enhancements. You're making stuff up as you go. It's a little funny. The Wii U can't even run that game as well as the antiquated PS3 and 360.

You're grasping at straws here. xD



Hynad said:
tanok said:

Again, pay attention and read:

http://www.eurogamer.net/articles/digitalfoundry-ps4-ac4-patch-analysed-in-depth

I'm not the one saying this; it's Digital Foundry and others.

The game was 900p and 30fps, and was later patched to 1080p but still 30fps; only the PC version is 60fps.

And again, read.

If third parties were giving the Wii U special treatment, wouldn't ZombiU look better?

And it's a ground-up game, so what the...


Indeed. Read. That article states that the PS4 doesn't struggle at all to run the game in 1080p. The fact that it ran at 900p at first is irrelevant. The game runs at 1080p now, locked at 30fps, despite the enhancements. You're making stuff up as you go. It's a little funny. The Wii U can't even run that game as well as the antiquated PS3 and 360.

You're grasping at straws here. xD


Indeed, read: you have 7.5x more power but can't do something that only requires half of that.

Read.

And again, read.

The Wii U GPU die size is about 96mm², and that already accounts for the eDRAM and other things like the DSP and ARM cores.

Redwood is 104mm², but with a precise photo like Chipworks' it could be more like 94mm².

Can't you fit 94mm² into 96mm²?

Redwood has 400 stream cores, 20 TMUs and 8 ROPs, so the Wii U could obviously have the same since both are made at 40nm. Not to mention that if you cut the TMUs down to 16, you have space for more stream cores (TMUs are many times bigger than stream cores) or for other customizations.
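For context, the GFLOPS figures being argued over in this thread all come from the same standard shader-count arithmetic (each stream core doing 2 FLOPs per clock at 550MHz); here is a quick sketch of that math, where the stream-core counts themselves are the speculative inputs, not confirmed specs:

```python
# Rough GPU throughput math behind the numbers in this thread.
# Peak GFLOPS = stream cores * 2 FLOPs per clock * clock (GHz).
def peak_gflops(stream_cores, clock_mhz=550):
    return stream_cores * 2 * clock_mhz / 1000

# 160 cores gives the common "176 GFLOPS" estimate; 400 would be Redwood-class.
for cores in (160, 320, 400):
    print(f"{cores} stream cores @ 550 MHz -> {peak_gflops(cores):.0f} GFLOPS")
```

So the whole 176-vs-440 GFLOPS dispute reduces to whether the die holds 160 or ~400 stream cores.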

 

It's not just ports we're talking about; it's the scarce third-party support and the fact that no one is going to give special treatment to a system with poor sales and market share unless Nintendo shows them the money.

That, and the example of a powerful system like the PS4: even though it's capable of 1080p/60fps, the port only runs at 1080p/30fps (or 900p/30fps as it was before the patch). So why wouldn't the Wii U have trouble?

It's obvious that you need a more powerful system for a quick port to work, and if it were emulation, even more so.

Quick ports are just adjusted to work on the system, not to take advantage of all the power behind it. Not to mention that, as the article says, Wii U ports only use about 33 watts; if we take into account the true peak wattage of the Wii U, per the US Department of Energy report, there are roughly 18 watts left that games don't use, which would mean something like 180 gigaflops or so.

 

So we have this:

Lazy ports require a more powerful system to even work; at the very least we can take that from the PS4 Assassin's Creed example or the Bayonetta port.

The Wii U GPU die size is more than enough to fit 400 stream cores, 20 TMUs and 8 ROPs; compare it to Redwood.

The Wii U power supply is about ≥85% efficient, which means that ports drawing 33 watts are leaving some gigaflops on the table: even accounting for the disc drive, USB ports, etc., there are still about 18 watts free, and that's something like 180 gigaflops that ports don't use (see the sketch after this list). This info comes from the Department of Energy report; it's not a rumor.

The Wii U is not receiving special treatment in most cases; even ZombiU was lazy and that was a ground-up game, so why would an Assassin's Creed port to Wii U, from the same third party, be any different?
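A minimal sketch of the watts-to-gigaflops reasoning above, assuming the poster's own numbers: the PSU rating and the component draw are illustrative values, and the ~10 GFLOPS-per-watt ratio is simply what "18 watts ≈ 180 gigaflops" implies, not a verified measurement.

```python
# Back-of-the-envelope version of the watts argument above.
# Every input here is a claim from the thread or an assumed illustrative value.
psu_rating_w       = 75     # Wii U PSU rating (assumed)
psu_efficiency     = 0.85   # ">= 85% efficiency" as claimed
port_draw_w        = 33     # wattage reportedly drawn by quick ports
other_components_w = 12     # disc drive, USB, etc. (assumed)
gflops_per_watt    = 10     # ratio implied by "18 watts ~ 180 gigaflops"

usable_w   = psu_rating_w * psu_efficiency               # power actually deliverable
headroom_w = usable_w - port_draw_w - other_components_w # watts games leave unused
print(f"headroom ~{headroom_w:.0f} W -> ~{headroom_w * gflops_per_watt:.0f} GFLOPS unused")
```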

 



tanok said:
Hynad said:
tanok said:

Again, pay attention and read:

http://www.eurogamer.net/articles/digitalfoundry-ps4-ac4-patch-analysed-in-depth

I'm not the one saying this; it's Digital Foundry and others.

The game was 900p and 30fps, and was later patched to 1080p but still 30fps; only the PC version is 60fps.

And again, read.

If third parties were giving the Wii U special treatment, wouldn't ZombiU look better?

And it's a ground-up game, so what the...


Indeed. Read. That article states that the PS4 doesn't struggle at all to run the game in 1080p. The fact that it ran at 900p at first is irrelevant. The game runs at 1080p now, locked at 30fps, despite the enhancements. You're making stuff up as you go. It's a little funny. The Wii U can't even run that game as well as the antiquated PS3 and 360.

You're grasping at straws here. xD


Indeed, read: you have 7.5x more power but can't do something that only requires half of that.

Read.

Dude, most ports show a 5x jump, yet you're stuck on the least impressive port. I wonder why?



drkohler said:
hated_individual said:
binary solo said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the WiiU camp, too...
And yes, 550MHz*1024bit is 560 gigaBITS/s, NOT gigaBYTES/s... (you forgot to account for the fact that the Wii U's eDRAM in the GPU is not a single macro, it is eight macros; your calculation/math is correct for a single macro but not for all eight of them).

So actually 70.4GB/s then. (per macro)

drkohler's calculation is for a single macro, not all eight macros combined; he forgot to factor into his own calculation that the Wii U has eight macros, not one.

A proper calculation would have been: 550MHz times 1024 bits times 8 macros equals 563.2 GB/s.

Good Lord... I actually spent (wasted...) almost an hour googling around on the internet to see where these numbers come from. I found one troll on a forum who got to an 8192-bit wide bus and this mystical 560GB/s number. Unfortunately this troll was posting on a technology forum, so he was immediately ridiculed into silence. That "troll" was probably right, and there is support for his theory while nobody has presented any proof/evidence against it. Also, if 563.2GB/s of bandwidth were so mystical, Sony would not have considered an option with 1 TB/s of bandwidth for the PlayStation 4 had they gone with eDRAM/eSRAM.

Apparently "macro" is the new buzzword now that magically creates big numbers.
Since when is the word "macro" a buzzword? It is not a buzzword, and since when does it magically create big numbers?

I have no idea what the expression "macro" means here. It is not an expression; it is a term used with eDRAM, and a "macro" is basically a block. One macro is one block.

 

So would any of you "macro people" answer the following questions:

1. Do you know how much die space and how much power the memory controllers require to drive an 8192-bit bus? Assuming 32 (!) proprietary 256-bit controllers or 128 (!!) conventional 64-bit controllers at a 40 (45?)nm process node? If you got the same number as me, didn't it raise a flag in your brain, just by looking at how large the entire GPU die actually is?

Do you know that we aren't talking about an 8192-bit bus at all? We are talking about eight buses, each 1024 bits wide, not a single 8192-bit bus. @underlined: it would have, a decade ago, and it didn't stop the Xbox 360, whose internal bandwidth came from each of its five 2MB macros having a 1024-bit bus, back in 2005.

2. Where on the die shot are all those memory controllers? How many metal layers would you need to connect all of this (hint: the Jaguar CPU only needs 11, and it has a measly 256-bit bus)?

Why ask me? Can't you see them yourself? Also, CPU ≠ GPU.

3. Why on earth would Microsoft go with a measly 109GB/s bus for the Xbox One APU's managed cache, if they could easily achieve twenty times the bandwidth with this magic WiiU technology?

Since when is something that's possible "magic"? Don't make me laugh. I guess you are not competent enough to answer your own questions; I don't know whether to laugh at you or pity you for not being able to answer something simple and for being so narrow-minded. Microsoft chose (2x512-bit?) eSRAM with its "measly" 109GB/s of bandwidth either because it was good enough for them, or to lower the percentage of defective chips, rather than use more complex/expensive eDRAM. They balanced cost against yield; plus, eDRAM is risky on a 28nm node, which is why you don't see any products using it there, and Intel is the only manufacturer making chips with eDRAM at 22nm.

40nm is now a mature, well-developed node, while chip fabs are only "just now" at 28nm; Intel has led with 22nm for a couple of years and is struggling with its 14nm node. The smaller the node, the higher the chance of a defective/failed chip.

4. Why would Nintendo engineers create a cache system that is way, way, way, way, way too fast for the CPU/GPU system?

Nothing is "too fast", and you could easily have come up with an answer on your own. First of all, Shin'en confirmed that on the Wii U the CPU can access the eDRAM in the GPU and use it as scratchpad memory, so it could also serve as an L3 cache(?), on top of the considerable benefits for GPU performance: double the bandwidth can yield up to 30-40% higher performance.

So after we have correctly pondered all those minor (lol) questions, the result can be summarised as follows: 550MHz * 128 bytes per cycle = 70.4GB/s is the WiiU eDRAM bandwidth. For a single macro/block...
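For what it's worth, both figures in this exchange fall out of the same bus-width arithmetic; here is that calculation as a quick sketch, where the eight-macro count is the assumption actually in dispute:

```python
# eDRAM bandwidth arithmetic used by both posters in this exchange.
clock_hz = 550e6    # 550 MHz
bus_bits = 1024     # per-macro bus width claimed in the thread
macros   = 8        # disputed: eight 1024-bit macros vs. a single one

per_macro_gbps = clock_hz * bus_bits / 8 / 1e9   # bits -> bytes, Hz -> GB/s
total_gbps     = per_macro_gbps * macros

print(f"per macro: {per_macro_gbps:.1f} GB/s")        # ~70.4 GB/s (drkohler's figure)
print(f"eight macros combined: {total_gbps:.1f} GB/s") # ~563.2 GB/s (the disputed total)
```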



tanok said:

Indeed, read: you have 7.5x more power but can't do something that only requires half of that.

Read.

You really have no idea what you're talking about. xD

I think I'll give up right here before I lose my sanity. I'm having a nice day so far, and you're certainly not worth the effort. xD



Hynad said:
Samus Aran said:
Hynad said:
Samus Aran said:

So you know how much effort developers put into PS4/Wii U ports? 

 

Wait... Does that question only apply to people who think the Wii U isn't as powerful as you believe? Do you know how much effort went into the Wii U's version of Arkham City?

Nope. You don't.

 

Works both ways. ¬_¬


You should read some interviews about the Darksiders 2 port and you'll know how much effort they put into these ports: a couple of hours at most.


Oh my... Are you for real? They managed to have a build ready for that Wii U E3 reveal, but they didn't take only a couple of hours to port the full game. They asked THQ to give them more time to polish the game, and they got two extra months for that, for Pete's sake!

Seriously, you have to be joking.


Sorry, I meant it only took them a couple of hours to get the core game running on the Wii U. These ports barely took any time.



starworld said:
tanok said:
Hynad said:
tanok said:

Again, pay attention and read:

http://www.eurogamer.net/articles/digitalfoundry-ps4-ac4-patch-analysed-in-depth

I'm not the one saying this; it's Digital Foundry and others.

The game was 900p and 30fps, and was later patched to 1080p but still 30fps; only the PC version is 60fps.

And again, read.

If third parties were giving the Wii U special treatment, wouldn't ZombiU look better?

And it's a ground-up game, so what the...


Indeed. Read. That article states that the PS4 doesn't struggle at all to run the game in 1080p. The fact that it ran at 900p at first is irrelevant. The game runs at 1080p now, locked at 30fps, despite the enhancements. You're making stuff up as you go. It's a little funny. The Wii U can't even run that game as well as the antiquated PS3 and 360.

You're grasping at straws here. xD


Indeed, read: you have 7.5x more power but can't do something that only requires half of that.

Read.

Dude, most ports show a 5x jump, yet you're stuck on the least impressive port. I wonder why?


I'm not saying the contrary, and if you apply that to the Wii U, then out of the 176 gigaflops they speculate about, how much is actually being used?

And the ports still work and run despite using engines that haven't been optimized for the Wii U, and on top of that they're lazy ports built around DirectX 9-level hardware specs?

Nope. Thanks for the comment; now the idea that 176 gigaflops is impossible is even more believable.



hated_individual said:
drkohler said:
hated_individual said:
binary solo said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the WiiU camp, too...
And yes, 550MHz*1024bit is 560 gigaBITS/s, NOT gigaBYTES/s... (you forgot to account for the fact that the Wii U's eDRAM in the GPU is not a single macro, it is eight macros; your calculation/math is correct for a single macro but not for all eight of them).

So actually 70.4GB/s then. (per macro)

drkohler's calculation is for a single macro, not all eight macros combined; he forgot to factor into his own calculation that the Wii U has eight macros, not one.

A proper calculation would have been: 550MHz times 1024 bits times 8 macros equals 563.2 GB/s.

Good Lord... I actually spent (wasted...) almost an hour googling around on the internet to see where these numbers come from. I found one troll on a forum who got to an 8192-bit wide bus and this mystical 560GB/s number. Unfortunately this troll was posting on a technology forum, so he was immediately ridiculed into silence. That "troll" was probably right, and there is support for his theory while nobody has presented any proof/evidence against it. Also, if 563.2GB/s of bandwidth were so mystical, Sony would not have considered an option with 1 TB/s of bandwidth for the PlayStation 4 had they gone with eDRAM/eSRAM.

Apparently "macro" is the new buzzword now that magically creates big numbers.
Since when is the word "macro" a buzzword? It is not a buzzword, and since when does it magically create big numbers?

I have no idea what the expression "macro" means here. It is not an expression; it is a term used with eDRAM, and a "macro" is basically a block. One macro is one block.

 

So would any of you "macro people" answer the following questions:

1. Do you know how much die space and how much power the memory controllers require to drive an 8192-bit bus? Assuming 32 (!) proprietary 256-bit controllers or 128 (!!) conventional 64-bit controllers at a 40 (45?)nm process node? If you got the same number as me, didn't it raise a flag in your brain, just by looking at how large the entire GPU die actually is?

Do you know that we aren't talking about an 8192-bit bus at all? We are talking about eight buses, each 1024 bits wide, not a single 8192-bit bus. @underlined: it would have, a decade ago, and it didn't stop the Xbox 360, whose internal bandwidth came from each of its five 2MB macros having a 1024-bit bus, back in 2005.

2. Where on the die shot are all those memory controllers? How many metal layers would you need to connect all of this (hint: the Jaguar CPU only needs 11, and it has a measly 256-bit bus)?

Why ask me? Can't you see them yourself? Also, CPU ≠ GPU.

3. Why on earth would Microsoft go with a measly 109GB/s bus for the Xbox One APU's managed cache, if they could easily achieve twenty times the bandwidth with this magic WiiU technology?

Since when is something that's possible "magic"? Don't make me laugh. I guess you are not competent enough to answer your own questions; I don't know whether to laugh at you or pity you for not being able to answer something simple and for being so narrow-minded. Microsoft chose (2x512-bit?) eSRAM with its "measly" 109GB/s of bandwidth either because it was good enough for them, or to lower the percentage of defective chips, rather than use more complex/expensive eDRAM. They balanced cost against yield; plus, eDRAM is risky on a 28nm node, which is why you don't see any products using it there, and Intel is the only manufacturer making chips with eDRAM at 22nm.

40nm is now a mature, well-developed node, while chip fabs are only "just now" at 28nm; Intel has led with 22nm for a couple of years and is struggling with its 14nm node. The smaller the node, the higher the chance of a defective/failed chip.

4. Why would Nintendo engineers create a cache system that is way, way, way, way, way too fast for the CPU/GPU system?

Nothing is "too fast", and you could easily have come up with an answer on your own. First of all, Shin'en confirmed that on the Wii U the CPU can access the eDRAM in the GPU and use it as scratchpad memory, so it could also serve as an L3 cache(?), on top of the considerable benefits for GPU performance: double the bandwidth can yield up to 30-40% higher performance.

So after we have correctly pondered all those minor (lol) questions, the result can be summarised as follows: 550MHz * 128 bytes per cycle = 70.4GB/s is the WiiU eDRAM bandwidth. For a single macro/block...


Sorry, wrong guy.

As for why Microsoft went with eSRAM instead of sticking with eDRAM, well, that has already been explained.

Here:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"

"This controversy is rather surprising to me, especially when you view as ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it," explains Andrew Goossen.

"We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM. Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM.

"The Xbox 360 was the easiest console platform to develop for, it wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said'gosh, it would sure be nice if an entire render target didn't have to live in eDRAM' and so we fixed that on Xbox One where we have the ability to overflow from ESRAM into DDR3, so the ESRAM is fully integrated into our page tables and so you can kind of mix and match the ESRAM and the DDR memory as you go... From my perspective it's very much an evolution and improvement - a big improvement - over the design we had with the Xbox 360. I'm kind of surprised by all this, quite frankly."

"

 

Basically, they went with eSRAM so that they could overflow data into the DDR3 RAM.

But of course there is a trade-off.

eSRAM is far more expensive than eDRAM; you need about 4x to 6x more transistors depending on the design. So obviously, if Microsoft had gone with eDRAM again they could have afforded eDRAM with huge bandwidth, but since they wanted that trick of mixing eSRAM and DDR3, some sacrifices had to be made. Still, ~200GB/s fully integrated into the GPU instead of sitting on a separate die, plus the advantage of very low latency and the overflow trick Microsoft wanted to implement, seems like the right choice.

For instance, the 360's 256GB/s of eDRAM bandwidth was only accessible to the ROPs, because the eDRAM and ROPs sat on the same daughter die, and the finished result then had to be passed back to the GPU over an external 32GB/s bus. There is no such external bus on the Xbox One or Wii U; the embedded memory is fully accessible to the whole GPU, not just the ROPs.

At the end of the day, those 200GB/s with lower latency could, if used well, perform as well as the 563GB/s (or better, since that figure assumes NEC's old 1024-bit design from seven years ago), and with the added advantage of mixing it with DDR3 it could maybe even outperform it to some degree.
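To keep the numbers straight, here are the bandwidth figures side by side as they are cited in this thread; these are the thread's own claims, not verified spec-sheet values:

```python
# Bandwidth figures as cited in this thread (not verified spec-sheet values).
cited_bandwidth_gb_s = {
    "Xbox 360 eDRAM, internal to the ROPs":    256,
    "Xbox 360 eDRAM daughter die -> GPU bus":   32,
    "Xbox One eSRAM (figure cited above)":     200,   # 109 GB/s is also quoted earlier
    "Wii U eDRAM, single 1024-bit macro":       70.4,
    "Wii U eDRAM, claimed eight-macro total":  563.2,
}
for label, gb_s in cited_bandwidth_gb_s.items():
    print(f"{label:42s} {gb_s:7.1f} GB/s")
```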



Hynad said:
tanok said:

Indeed, read: you have 7.5x more power but can't do something that only requires half of that.

Read.

You really have no idea what you're talking about. xD

I think I'll give up right here before I lose my sanity. I'm having a nice day so far, and you're certainly not worth the effort. xD

Indeed, most PS4 ports are showing a 5x jump; even AC4 does during multiplayer, and these are launch-day titles. The Wii U, even after a year, continues to struggle to match 360/PS3 games; we don't even have proof of one game showing a 2x jump.



Hynad said:
tanok said:

Indeed, read: you have 7.5x more power but can't do something that only requires half of that.

Read.

You really have no idea what you're talking about. xD

I think I'll give up right here before I lose my sanity. I'm having a nice day so far, and you're certainly not worth the effort. xD

 

You have even less idea. At least I provide sources and examples; what do you provide besides questioning others?

I think it's your turn to answer.

If ports work on the Wii U even though the system isn't maxed out, is forced to run the game with parameters specified for the older hardware, and the new tricks on the Wii U like compute shaders and other features aren't used, then how come it can run these games with less power when the PS4 needs more power?

 

Or even better, a simpler question:

If the platform receiving the port doesn't need more power in order to run a game from the other platform, then why did the PS3 Bayonetta port perform and look worse than the 360 version?

Isn't the PS3 about 1.5x more powerful?

Didn't the PS3 version run at an unstable 30fps while the Xbox 360 version ran closer to 50 or 60?

Weren't both platforms out there for many years?

Has the Wii U been out long enough to be maxed out?

If not, then how could a system weaker than the others, and unfamiliar to developers, even run these games with less power than its counterparts?

 

Do you really expect a weaker system to perform as well as or better than the others on lazy ports, when we already have many examples from both the previous generation and the current one that say the contrary?

Do you expect special treatment for a system most third-party developers aren't interested in spending money on because of its low sales?

You say Assassin's Creed 3 for Wii U received special treatment so it could run the 360 port with less power, yet ZombiU, which was a ground-up game, was lazy and had unimpressive graphics?

 

Seriously, with all those examples it's not hard to understand. It's obvious that even if Nintendo said the contrary, you would still insist it's the other way around.

Certainly you are not worth the effort. No matter what is proven, you are just going to stick with your thinking regardless. The worst part is that you don't even have material to defend your arguments; you just make them and that's all, yet you expect others to present proof and explain themselves.

 

You have taken the words right out of my mouth: certainly you are not worth the effort.

My information has already debunked your arguments.

And no, it's not just my word.

It's examples from Digital Foundry,

comments from IGN,

developers from Trine 2 and Shin'en,

facts like the Wii U GPU die size being enough for 400 stream cores, 20 TMUs and 8 ROPs,

and examples showing that lazy ports always require a more powerful system to even work (when they are lazy, of course); it's not just Assassin's Creed, it's also Bayonetta 1 from the previous gen.

 

Seriously, all you do is argue and talk. At least have the dignity to offer proof and sources, not just comments.

 

Seriously, this shouldn't even be a debate. I'm not the one saying it's stronger; developers are.

You have Shin'en,

the Trine 2 developers,

the ones who made Darksiders 2, who said the Wii U was at least on par with the 360 and PS3,

and we have the comments from Crytek as well.

And let's not forget that survey from 2011 when many developers were asked how powerful the Wii U was, and most of them said it was about 50% more powerful than the 360 and PS3. That was back in 2011, when developers only had access to early development kits; the final one didn't arrive until a little before mid-2012.

It's simple: the Wii U is more powerful, there's no question about it. The PS4 example with the Assassin's Creed port, and of course the example of Bayonetta on PS3 and Xbox 360, prove that a quick port is not going to perform well on the platform you port to unless you rework and optimize the code, and even then, if it isn't done properly, it will perform worse than on the platform it came from. So obviously, if the Wii U were weaker it couldn't even run quick ports; the PS3 was stronger than the 360, and Bayonetta was still the worst version on PS3 because it was a port.