
Forums - Nintendo Discussion - Wii U's eDRAM stronger than given credit?

Hynad said:
megafenix said:
Hynad said:

Shin'en is the last example to use in any of those arguments. They've always developed exclusively for Nintendo hardware and have no knowledge of the other platforms.


So it's better to listen to an unknown guy on NeoGAF, who even said at IGN that Green Hills and AT software were not related at all?

And I can prove that.

 

That guy is an obvious troll. 176 gigaflops won't do any magic in your ports no matter how efficient the system is, because a port isn't a ground-up game, and those ports were made by developers unfamiliar with the new system, using early dev tools and engines not optimized for the new hardware. And if even the new consoles, which are more modern and surpass the power of the 360 and PS3 by 5 to 7x, have ports so bad they don't hit 1080p60fps, with one of them, Call of Duty Ghosts, running at just 720p, then why expect anything different?

 

And it's not just Shin'en; we also have many others, like the Armillo guys, the people who made the Giana Sisters game, etc.

You keep going back to that same old trench every time you can't come up with any valid argument.

The PS4 doesn't struggle to run those ports, unlike what you believe. Those cross-gen games run with improved visuals, higher native resolution, higher framerate, improved and/or added VFX; in many cases they include ambient occlusion, different lighting systems, higher-res textures, improved geometry, etc...

You really don't seem to understand the words you're copy-pasting.


Don't know why you bring up the PS3 when I was talking about the PS4 and Xbox 360, but now that you bring it up, could you please explain to us why the PS3 version of Bayonetta was worse than the 360 version?

Why half the framerate?

Isn't the PS3 capable of things like The Last of Us?

Then what happened with Bayonetta on PS3?

Remember, your answer also applies to other cases, like the Wii U.

PS4 improved visuals, huh?

Like Assassin's Creed 4?

Because that game looks very similar on Wii U, but besides that it was just running at 900p 30fps, even though the PS4 is more modern, more efficient and on top of that a lot more powerful on paper, like 750%. So why just 900p30fps when the 360 does it at 720p 30fps with old tech and 750% less power?

Even with the update it was only 1080p30fps, what the fu-...

Shouldn't it be capable of 60fps?

Why just PC?

 

And if the PS4 can't even do something that requires only about 300% of the 360's power to run at 1080p60fps, why would you expect the Wii U, with half the power of the 360, to do better and deliver better graphics at the same resolution and framerate, when the numbers say the PS4 needs more than 2x the 360 just to render this Assassin's Creed port at the same resolution and framerate?
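For reference, since these resolution and framerate combinations keep coming up, here is a quick back-of-the-envelope comparison of raw pixel throughput only; it deliberately ignores per-pixel shading cost, settings and effects, so treat it as the crudest possible yardstick:

```python
# Raw pixels-per-second for the modes mentioned above (and nothing else).
def pixels_per_second(width, height, fps):
    return width * height * fps

modes = {
    "720p30":  (1280, 720, 30),
    "900p30":  (1600, 900, 30),
    "1080p30": (1920, 1080, 30),
    "1080p60": (1920, 1080, 60),
}

base = pixels_per_second(1280, 720, 30)
for name, (w, h, fps) in modes.items():
    pps = pixels_per_second(w, h, fps)
    print(f"{name}: {pps / 1e6:6.1f} Mpix/s ({pps / base:.2f}x of 720p30)")
# 720p30: 27.6 Mpix/s (1.00x), 900p30: 43.2 (1.56x),
# 1080p30: 62.2 (2.25x), 1080p60: 124.4 (4.50x)
```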

 

And that's not all; it's not just the ports that point to this, there's also:

the Wii U die size, enough to hold 400 stream cores, 20 TMUs and 8 ROPs like the Redwood XT

developers who said the Wii U was 50% more powerful than the PS3 and Xbox 360, based on the early dev kit

comments from many developers, not just Shin'en but many others

the Chipworks report that the Wii U had 320 stream cores, 16 TMUs and 8 ROPs, with half the GPU undetermined, later changed by NeoGAF (not Chipworks) to say that there were interpolators there; and the funny thing is that the TMUs were supposedly actually interpolators

Really?

Do TMUs really resemble interpolators enough to be confused in the first place?

Isn't Nintendo all about performance?

Then why keep the old 2008 interpolators when AMD said back in 2009 that they removed them for the HD 5000 series because they were causing the RV770 to underperform, and decided to leave the interpolation to the stream cores?

The HD 5000 and HD 4000 aren't that different; the architecture is largely the same with minor changes and upgrades, but one of the biggest changes was the removal of the interpolators and the addition of DirectX 11 support. Remove the speculated interpolators from the NeoGAF diagram and tell me how many stream cores you would get for the Wii U.

And it's not just the Candle developers who revealed that the Wii U is capable of DirectX 11 features, or Shader Model 5, with its custom API; we also have the Giana Sisters and Armillo developers and many others.



megafenix said:

Then why keep the old 2008 interpolators when AMD said back in 2009 that they removed them for the HD 5000 series because they were causing the RV770 to underperform, and decided to leave the interpolation to the stream cores?

The HD 5000 and HD 4000 aren't that different; the architecture is largely the same with minor changes and upgrades, but one of the biggest changes was the removal of the interpolators and the addition of DirectX 11 support. Remove the speculated interpolators from the NeoGAF diagram and tell me how many stream cores you would get for the Wii U.

And it's not just the Candle developers who revealed that the Wii U is capable of DirectX 11 features, or Shader Model 5, with its custom API; we also have the Giana Sisters and Armillo developers and many others.


The interpolators didn't under-perform.
They were a waste of die space that could be better used for something else.
The Radeon 4000 series of GPUs was stupidly potent per transistor, as nVidia quickly found out.

The interpolation work was moved onto the shaders, again out of efficiency, as transistors spent on that fixed-function unit could be better utilised for more shaders that can handle all sorts of data sets.

megafenix said:

The rumor comes from this NeoGAF post examining the system’s base specs, which has not been publicly disclosed by Nintendo, but is speculated here at 160 ALUs, 8 TMUs, and 8 ROPs. The poster then tried to collect information from older interviews to clarify how they got to these specs. Taking tidbits from another forum thread about earlier Wii U prototypes, an interview with Vigil Games, and the Iwata Asks on the Wii U, the poster came to the conclusion that Nintendo must have lowered the system’s power specs to fit a smaller case.

Unfortunately, considering the process of making launch games for new consoles is a secretive and difficult process, I think there is too little information publicly available to really make these kinds of calls. Even if we take into account developers breaking NDAs, each group involved with a system launch really only has a limited amount of information. Developer kits themselves are expected to get tweaked as the specs get finalized, and the only people who would really know are too far up in the industry to say anything.


So by your own admission the specs you come up with are also not disclosed by Nintendo and should also be taken with a truck full of salt, good to know.

megafenix said:
Sorry dude, the PS4 and Xbox One are also more efficient and more modern, and despite that they can't even do 1080p 60fps in their ports, even with all those advantages and even surpassing the raw power required for it, so why would you expect the Wii U to do any better with 160 stream cores?


Let's take a look at all the multiplatforms then, shall we?

1) Battlefield 4 - Xbox One and PlayStation 4 doing "High"-equivalent PC settings, rather than Ultra. (Wii U?)
2) Call of Duty Ghosts - Runs on a toaster; minor visual improvements on the next-gen twins and the PC, mostly just superficial, which is to be expected with such short development cycles and an iterative release schedule designed to take as much money from the consumer as possible with the smallest amount of expenditure.
3) Assassin's Creed: Black Flag - Minor improvements; mostly just matches the PC version.

The Xbox 360/PlayStation 3 did "low"-equivalent settings at 720p@30fps in Battlefield 4.
There is a stupidly large *night and day* difference that only PC and next-generation console gamers can genuinely appreciate; to think that you don't see a next-gen difference is pretty telling.

megafenix said:

The Wii U GPU die is also around 96mm2 even without the eDRAM and other stuff, and that's enough to hold 400 stream cores, 20 TMUs and 8 ROPs just like the Redwood XT, which could be around 94mm2 going by a Chipworks photo instead of the 104mm2 reported by AnandTech, the same way the Wii U GPU die size went from the 156mm2 reported by AnandTech to 146mm2 in the Chipworks photo.

All that efficiency goes to waste on a port; Shin'en already explained that your engine layout has to be different for the Wii U or you take a big performance hit, meaning that your 176 gigaflops go even lower because of this.

Wii U's GPU "Latte" is 146.48 mm2; 85 mm2 of that is what is purely available for the GPU.

What makes up the rest of the die? Well, that's simple.
There is a possibility that a dual-core ARM A8 processor exists on the die for OS/background/security work; an ARM7 core, for instance, should consume roughly 10-20 million transistors per core.
You then have an additional ARM A9 core for backwards compatibility.
Then an additional ARM DSP.
And don't forget the eDRAM.
Then you have a few megabytes of SRAM to be used as a cache.
Even more eDRAM for Wii backwards compatibility.
Then you need memory controllers and other tidbits for interoperability.

Yet you seem to think that somehow, magically, they can fit in 400 (old and inefficient) VLIW5 stream processors?
Heck, even Chipworks+NeoGAF don't think it has that much; they think it's roughly 160-320.

However, let's be realistic about the numbers you are grabbing: there is no logic, there is no solid source, and some of the source information you post has ZERO relevance to the topic at hand.

However, to put things in perspective: the Latte GPU is of an unknown design, that's 100% certain. We do know it's based on a Radeon 5000/6000 architecture, but that's about it.

There are 8 logic blocks or SPUs. For VLIW5 there are usually 20-40 ALUs per block.
If it was based upon VLIW4 (unlikely!) then there is the potential for 32 ALUs.
Thus 8*20 = 160.
And 8*40 = 320.
And 8*32 = 256.

These are the most logical figures and the ones any sane person would go for. Why? Well, history: AMD spent a lot of years refining VLIW, so it knows what does and does not work and what is most efficient.

There is an absolutely tiny, and I mean *tiny*, chance that the Wii U is using more ALUs in every SPU, but let me be clear: you won't be getting 400 stream processors.

Thus if it had 160 stream processors you would have 176 GFLOPS.
If it had 320 stream processors you would have 352 GFLOPS.
If it had 256 VLIW4 stream processors you would have 282 GFLOPS.
With VLIW4 (256 stream processors) probably being faster than VLIW5 (320 stream processors) due to efficiencies.
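To make that arithmetic explicit, here is a minimal sketch of the figures above. The ~550 MHz clock and 2 FLOPs (one multiply-add) per ALU per cycle are the commonly reported assumptions for Latte, not confirmed specs:

```python
# GFLOPS = stream processors x 2 FLOPs per clock x clock (GHz).
LATTE_CLOCK_GHZ = 0.55  # assumed ~550 MHz, as commonly reported, not official

def gflops(stream_processors, clock_ghz=LATTE_CLOCK_GHZ):
    return stream_processors * 2 * clock_ghz

for alus in (160, 256, 320):
    print(f"{alus} stream processors -> {gflops(alus):.0f} GFLOPS")
# 160 -> 176 GFLOPS, 256 -> 282 GFLOPS, 320 -> 352 GFLOPS
```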

Red are the GPU blocks, because they're a repeating pattern that's all the same; blue is likely the back-end.

Also don't forget that when you are building a die, you need to set aside a significant percentage of transistors for redundancy, so you can obtain maximum yield out of a die by having spares to replace any parts that are faulty. (An example being the Cell processor with a core disabled.)


megafenix said:
All these things point out that 176 gigaflops, or just 160 stream processors, wouldn't be enough for a lazy port to even work on the Wii U. Hell, even one of the secret developers admitted that, despite the Wii U having compute shader capability with an early dev kit, they didn't use the feature; that much tells you how lazy they were being, so obviously all that performance and modern architecture goes to waste when you port from an older system in most cases. And we expect the console to do miracles with just 160 stream cores when not even the Xbox One and PS4 can do 1080p 60fps in ports from the older generation, even though they are more modern, more efficient and also exceed the raw power needed for that? Of course not. And it's not just the problems that come with porting, or the developers being lazy; it's also the fact that the development tools and engines were taking their baby steps when the first Wii U ports came out, and still the games worked fine. So summing all that up, the examples of the Xbox One and PS4 ports, and the worse PS3 version of Bayonetta due to being a port, also point this out.

It's not only Bayonetta 1 on PS3 being bad because it was a port, compared to the 360, and having half the framerate. It's not just the PS4 version of Assassin's Creed 4, which, despite the PS4 being more modern, more efficient and easier to develop for, can't run at 1080p60fps and was running at just 900p30fps before the 1080p30fps patch, despite having all the advantages of modern hardware and being 750% more powerful than the 360 on paper. We also have many other examples. Have you forgotten Call of Duty Ghosts on Xbox One? The Xbox One is more modern, more efficient and also 500% more powerful on paper, so why is it running at just 720p?

 

 

I had a Radeon 6450 once, which has 160 stream processors; you would be surprised how well it runs console ports at 720p with low settings, even Assassin's Creed 4. (Thankfully I replaced that POS with a Radeon 6570 in the Core 2 rig.)
But don't take my word for it...
http://www.youtube.com/watch?v=yFOrCJxnyDo

Remember, the next-gen twins are running games graphically superior to the Wii U in every single way; they're in completely different leagues.
What you're suggesting is essentially that the difference between a Wii U and the fastest PC you can build has no performance delta because both are only running at 720p. (The audacity!)
Since when has resolution been the defining factor around this forum in regards to hardware potency and graphics? I mean, seriously? It's but one minor part of a stupidly large puzzle that defines the graphical fidelity of the scene showcased on your screen.

megafenix said:

Shouldn't the cache help with the framerate? The 360 has only 1MB and the Wii U has 3MB, so even if it's slower, if the developers take advantage of the extra 2MB of cache there should be no framerate issues. Plus, one of the secret developers at Eurogamer admitted they were being lazy, like saying the compute shaders were there but they didn't bother using them. Add to that the fact that the 360 uses one core for sound, so obviously the Wii U port will do the same instead of using the DSP if the developers don't put effort into changing the code rather than just adapting it. Your geometry shaders also go to waste, because the 360 doesn't support them and the port won't use them unless the developers change it, and if they didn't bother with the compute shaders, why would they bother with the geometry shaders and the other new stuff for better performance?

etc

No. Cache does NO form of processing; it just stores information, like a hard drive, just orders of magnitude faster.

As for geometry shaders and compute shaders, the Xbox 360 can deal with those, albeit in an extremely limited way, and that work would probably translate well over to Latte, as Latte is several evolutionary steps beyond the Xbox 360's GPU.
However, the other little tidbits of the hardware may have had an effect on things.

Anyhow, I know this is a long stretch, but do try to listen to reason instead of falling back on your disproved ideas and numbers.



--::{PC Gaming Master Race}::--

Really, who cares anymore? If the games look good they look good.



Any message from Faxanadu is written in good faith but shall neither be binding nor construed as constituting a commitment by Faxanadu except where provided for in a written agreement signed by an authorized representative of Faxanadu. This message is intended for the use of the forum members only.

The views expressed here may be personal and/or offensive and are not necessarily the views of Faxanadu.

Pemalite said:
megafenix said:

Then why keep the old 2008 interpolators when AMD said back in 2009 that they removed them for the HD 5000 series because they were causing the RV770 to underperform, and decided to leave the interpolation to the stream cores?

The HD 5000 and HD 4000 aren't that different; the architecture is largely the same with minor changes and upgrades, but one of the biggest changes was the removal of the interpolators and the addition of DirectX 11 support. Remove the speculated interpolators from the NeoGAF diagram and tell me how many stream cores you would get for the Wii U.

And it's not just the Candle developers who revealed that the Wii U is capable of DirectX 11 features, or Shader Model 5, with its custom API; we also have the Giana Sisters and Armillo developers and many others.


The interpolators didn't under-perform.
They were a waste of die space that could be better used for something else.
The Radeon 4000 series of GPUs was stupidly potent per transistor, as nVidia quickly found out.

The interpolation work was moved onto the shaders, again out of efficiency, as transistors spent on that fixed-function unit could be better utilised for more shaders that can handle all sorts of data sets.

megafenix said:

The rumor comes from this NeoGAF post examining the system’s base specs, which has not been publicly disclosed by Nintendo, but is speculated here at 160 ALUs, 8 TMUs, and 8 ROPs. The poster then tried to collect information from older interviews to clarify how they got to these specs. Taking tidbits from another forum thread about earlier Wii U prototypes, an interview with Vigil Games, and the Iwata Asks on the Wii U, the poster came to the conclusion that Nintendo must have lowered the system’s power specs to fit a smaller case.

Unfortunately, considering the process of making launch games for new consoles is a secretive and difficult process, I think there is too little information publicly available to really make these kinds of calls. Even if we take into account developers breaking NDAs, each group involved with a system launch really only has a limited amount of information. Developer kits themselves are expected to get tweaked as the specs get finalized, and the only people who would really know are too far up in the industry to say anything.


So by your own admission the specs you come up with are also not disclosed by Nintendo and should also be taken with a truck full of salt, good to know.

megafenix said:
Sorry dude, the PS4 and Xbox One are also more efficient and more modern, and despite that they can't even do 1080p 60fps in their ports, even with all those advantages and even surpassing the raw power required for it, so why would you expect the Wii U to do any better with 160 stream cores?


Let's take a look at all the multiplatforms then, shall we?

1) Battlefield 4 - Xbox One and PlayStation 4 doing "High"-equivalent PC settings, rather than Ultra. (Wii U?)
2) Call of Duty Ghosts - Runs on a toaster; minor visual improvements on the next-gen twins and the PC, mostly just superficial, which is to be expected with such short development cycles and an iterative release schedule designed to take as much money from the consumer as possible with the smallest amount of expenditure.
3) Assassin's Creed: Black Flag - Minor improvements; mostly just matches the PC version.

The Xbox 360/PlayStation 3 did "low"-equivalent settings at 720p@30fps in Battlefield 4.
There is a stupidly large *night and day* difference that only PC and next-generation console gamers can genuinely appreciate; to think that you don't see a next-gen difference is pretty telling.

megafenix said:

The Wii U GPU die is also around 96mm2 even without the eDRAM and other stuff, and that's enough to hold 400 stream cores, 20 TMUs and 8 ROPs just like the Redwood XT, which could be around 94mm2 going by a Chipworks photo instead of the 104mm2 reported by AnandTech, the same way the Wii U GPU die size went from the 156mm2 reported by AnandTech to 146mm2 in the Chipworks photo.

All that efficiency goes to waste on a port; Shin'en already explained that your engine layout has to be different for the Wii U or you take a big performance hit, meaning that your 176 gigaflops go even lower because of this.

Wii U's GPU "Latte" is 146.48 mm2; 85 mm2 of that is what is purely available for the GPU.

What makes up the rest of the die? Well, that's simple.
There is a possibility that a dual-core ARM A8 processor exists on the die for OS/background/security work; an ARM7 core, for instance, should consume roughly 10-20 million transistors per core.
You then have an additional ARM A9 core for backwards compatibility.
Then an additional ARM DSP.
And don't forget the eDRAM.
Then you have a few megabytes of SRAM to be used as a cache.
Even more eDRAM for Wii backwards compatibility.
Then you need memory controllers and other tidbits for interoperability.

Yet you seem to think that somehow, magically, they can fit in 400 (old and inefficient) VLIW5 stream processors?
Heck, even Chipworks+NeoGAF don't think it has that much; they think it's roughly 160-320.

However, let's be realistic about the numbers you are grabbing: there is no logic, there is no solid source, and some of the source information you post has ZERO relevance to the topic at hand.

However, to put things in perspective: the Latte GPU is of an unknown design, that's 100% certain. We do know it's based on a Radeon 5000/6000 architecture, but that's about it.

There are 8 logic blocks or SPUs. For VLIW5 there are usually 20-40 ALUs per block.
If it was based upon VLIW4 (unlikely!) then there is the potential for 32 ALUs.
Thus 8*20 = 160.
And 8*40 = 320.
And 8*32 = 256.

These are the most logical figures and the ones any sane person would go for. Why? Well, history: AMD spent a lot of years refining VLIW, so it knows what does and does not work and what is most efficient.

There is an absolutely tiny, and I mean *tiny*, chance that the Wii U is using more ALUs in every SPU, but let me be clear: you won't be getting 400 stream processors.

Thus if it had 160 stream processors you would have 176 GFLOPS.
If it had 320 stream processors you would have 352 GFLOPS.
If it had 256 VLIW4 stream processors you would have 282 GFLOPS.
With VLIW4 (256 stream processors) probably being faster than VLIW5 (320 stream processors) due to efficiencies.

Red are the GPU blocks, because they're a repeating pattern that's all the same; blue is likely the back-end.

Also don't forget that when you are building a die, you need to set aside a significant percentage of transistors for redundancy, so you can obtain maximum yield out of a die by having spares to replace any parts that are faulty. (An example being the Cell processor with a core disabled.)


megafenix said:
All these things point out that 176 gigaflops, or just 160 stream processors, wouldn't be enough for a lazy port to even work on the Wii U. Hell, even one of the secret developers admitted that, despite the Wii U having compute shader capability with an early dev kit, they didn't use the feature; that much tells you how lazy they were being, so obviously all that performance and modern architecture goes to waste when you port from an older system in most cases. And we expect the console to do miracles with just 160 stream cores when not even the Xbox One and PS4 can do 1080p 60fps in ports from the older generation, even though they are more modern, more efficient and also exceed the raw power needed for that? Of course not. And it's not just the problems that come with porting, or the developers being lazy; it's also the fact that the development tools and engines were taking their baby steps when the first Wii U ports came out, and still the games worked fine. So summing all that up, the examples of the Xbox One and PS4 ports, and the worse PS3 version of Bayonetta due to being a port, also point this out.

It's not only Bayonetta 1 on PS3 being bad because it was a port, compared to the 360, and having half the framerate. It's not just the PS4 version of Assassin's Creed 4, which, despite the PS4 being more modern, more efficient and easier to develop for, can't run at 1080p60fps and was running at just 900p30fps before the 1080p30fps patch, despite having all the advantages of modern hardware and being 750% more powerful than the 360 on paper. We also have many other examples. Have you forgotten Call of Duty Ghosts on Xbox One? The Xbox One is more modern, more efficient and also 500% more powerful on paper, so why is it running at just 720p?

 

 

I had a Radeon 6450 once, which has 160 stream processors; you would be surprised how well it runs console ports at 720p with low settings, even Assassin's Creed 4. (Thankfully I replaced that POS with a Radeon 6570 in the Core 2 rig.)
But don't take my word for it...
http://www.youtube.com/watch?v=yFOrCJxnyDo

Remember, the next-gen twins are running games graphically superior to the Wii U in every single way; they're in completely different leagues.
What you're suggesting is essentially that the difference between a Wii U and the fastest PC you can build has no performance delta because both are only running at 720p. (The audacity!)
Since when has resolution been the defining factor around this forum in regards to hardware potency and graphics? I mean, seriously? It's but one minor part of a stupidly large puzzle that defines the graphical fidelity of the scene showcased on your screen.

megafenix said:

Shouldn't the cache help with the framerate? The 360 has only 1MB and the Wii U has 3MB, so even if it's slower, if the developers take advantage of the extra 2MB of cache there should be no framerate issues. Plus, one of the secret developers at Eurogamer admitted they were being lazy, like saying the compute shaders were there but they didn't bother using them. Add to that the fact that the 360 uses one core for sound, so obviously the Wii U port will do the same instead of using the DSP if the developers don't put effort into changing the code rather than just adapting it. Your geometry shaders also go to waste, because the 360 doesn't support them and the port won't use them unless the developers change it, and if they didn't bother with the compute shaders, why would they bother with the geometry shaders and the other new stuff for better performance?

etc

No. Cache does NO form of processing; it just stores information, like a hard drive, just orders of magnitude faster.

As for geometry shaders and compute shaders, the Xbox 360 can deal with those, albeit in an extremely limited way, and that work would probably translate well over to Latte, as Latte is several evolutionary steps beyond the Xbox 360's GPU.
However, the other little tidbits of the hardware may have had an effect on things.

Anyhow, I know this is a long stretch, but do try to listen to reason instead of falling back on your disproved ideas and numbers.

 

400 stream cores ain't a problem. As I told you, why don't you check the Wii U GPU die size and compare it to the Redwood XT? And don't forget to check the die size that AnandTech reported and the one from Chipworks, then apply the same logic to the Redwood and you will get it.

Can't you fit 94mm2 into 96mm2?

Please check the AnandTech and Chipworks reports on the Wii U die size before responding. Plus, don't forget that the Redwood XT also has 20 TMUs, not the 16 the Wii U is speculated to have, so all that silicon area could be used for whatever Nintendo wanted to put in their fixed-function hardware. And yeah, don't forget that the Redwood XT also has a tessellator and other things, so those are already accounted for in the die area.

 

Cache is more important than you think; it stores data, don't you see?

The memory export on the 360 is used to move fetch data to main RAM because the eDRAM is locked for the framebuffer. On the Wii U you can store that fetch data in the eDRAM and your performance will go up by a lot. Of course, in ports you won't see this, because the ports were done initially on the 360 and moved to the Wii U, and since the 360 stores the fetch data in main RAM, so will the Wii U port.
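To give a rough sense of the sizes being argued about, here is my own back-of-the-envelope framebuffer maths, assuming a plain 32-bit colour target plus a 32-bit depth/stencil buffer (real layouts and MSAA change these numbers):

```python
# Approximate render-target footprint vs. the eDRAM pools being discussed.
def target_mb(width, height, color_bytes=4, depth_bytes=4):
    return width * height * (color_bytes + depth_bytes) / (1024 ** 2)

print(f"720p colour+depth:  {target_mb(1280, 720):.1f} MB")   # ~7.0 MB
print(f"1080p colour+depth: {target_mb(1920, 1080):.1f} MB")  # ~15.8 MB
# Against 10 MB of eDRAM (360) a 720p target barely fits and anything bigger
# needs tiling/resolves; against a 32 MB pool (Wii U) both fit with room to
# spare for other data.
```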

 

I already showed you how important fetch data is; please read it before saying something.

Here it is again, and it's not me, it's NVIDIA who says this:

https://developer.nvidia.com/content/vertex-texture-fetch

"

Vertex Texture Fetch

Vertex_Textures.pdf: With Shader Model 3.0, GeForce 6 and GeForce 7 Series GPUs have taken a huge step towards providing common functionality for both vertex and pixel shaders. This paper focuses specifically on one Shader Model 3.0 feature: Vertex Texture Fetch (PDF). It allows vertex shaders to read data from textures, just like pixel shaders can. This additional feature is useful for a number of effects, including displacement mapping, fluid and water simulation, explosions, and more. The image below shows the visual impact of adding vertex textures, comparing an ocean without (left) and with (right) vertex textures.

"

Plus, that image from Chipworks doesn't reveal half the GPU yet, so obviously there could be more stream cores, enough to reach 400, and there would even be space enough for the tessellator, rasterizer and other components.
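For anyone unfamiliar with the feature quoted above, here is a small CPU-side sketch in Python/NumPy of what vertex texture fetch lets a vertex shader do: sample a height texture per vertex and displace the position. The grid, texture and amplitude here are made-up illustration values, not anything from the NVIDIA paper:

```python
import numpy as np

# Stand-in height map; on the GPU this would be a texture bound to the
# vertex shader stage.
height_tex = np.random.rand(64, 64).astype(np.float32)

def displace(vertices_xyz, uvs, amplitude=0.5):
    """Displace each vertex along Y by a value fetched from the height texture."""
    h, w = height_tex.shape
    texels = (uvs * np.array([w, h])).astype(int) % np.array([w, h])
    heights = height_tex[texels[:, 1], texels[:, 0]]  # the per-vertex "texture fetch"
    out = vertices_xyz.copy()
    out[:, 1] += amplitude * heights
    return out

# A flat 8x8 grid on the XZ plane, with UVs spanning the texture.
xs, zs = np.meshgrid(np.linspace(0.0, 1.0, 8), np.linspace(0.0, 1.0, 8))
verts = np.stack([xs.ravel(), np.zeros(xs.size), zs.ravel()], axis=1)
uvs = np.stack([xs.ravel(), zs.ravel()], axis=1)
displaced = displace(verts, uvs)  # a bumpy "ocean" surface instead of a flat plane
```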



megafenix said:

 

400 stream cores ain't a problem. As I told you, why don't you check the Wii U GPU die size and compare it to the Redwood XT? And don't forget to check the die size that AnandTech reported and the one from Chipworks, then apply the same logic to the Redwood and you will get it.

Can't you fit 94mm2 into 96mm2?

Please check the AnandTech and Chipworks reports on the Wii U die size before responding. Plus, don't forget that the Redwood XT also has 20 TMUs, not the 16 the Wii U is speculated to have, so all that silicon area could be used for whatever Nintendo wanted to put in their fixed-function hardware. And yeah, don't forget that the Redwood XT also has a tessellator and other things, so those are already accounted for in the die area.


You're only comparing die-sizes.

You can have varying amounts of transistor density at the same fabrication node.
It's not as black and white as you religiously believe.

As for the die sizes themselves, you're omitting the fact that AnandTech didn't take into account the protective shell, and they didn't single out the GPU portion of the chip.
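A hedged illustration of that density point, using made-up (but 40 nm-ish) densities rather than measured figures for Latte or Redwood:

```python
# Same die area, different layout density -> very different transistor budgets.
AREA_MM2 = 96  # the GPU-ish area being argued about

for mtransistors_per_mm2 in (5.0, 6.0, 7.0):  # illustrative assumptions only
    total = AREA_MM2 * mtransistors_per_mm2
    print(f"{mtransistors_per_mm2:.1f} MTr/mm^2 -> ~{total:.0f} M transistors in {AREA_MM2} mm^2")
# Whether that budget becomes 160, 320 or 400 ALUs then depends on how much of
# it goes to eDRAM, fixed-function blocks, redundancy and routing.
```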

megafenix said:

Cache is more important than you think; it stores data, don't you see?

The memory export on the 360 is used to move fetch data to main RAM because the eDRAM is locked for the framebuffer. On the Wii U you can store that fetch data in the eDRAM and your performance will go up by a lot. Of course, in ports you won't see this, because the ports were done initially on the 360 and moved to the Wii U, and since the 360 stores the fetch data in main RAM, so will the Wii U port.

 

I'm not denying that one bit.
But you make it sound as if the eDRAM solves all of the lack-of-compute problems known to man; it doesn't.
A faster *large* pool of GDDR5 memory will always be preferred over eDRAM/eSRAM.

Remember, it's not the eDRAM that does all the calculations that put the pretty pictures on your screen, combined with the physics and A.I.
I harken back to the road analogy.

Besides, that's what predictors are for if you don't have an abundance of fast, low-latency memory: they fetch the data ahead of time so it's ready to be processed.
Actually, if you look at Intel's and AMD's processors, you will notice that a stupidly massive portion of the die is built to hide memory latency and bandwidth limitations, and only very little of it is compute resources.
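As a rough, machine-dependent illustration of that latency-hiding point: the same amount of arithmetic gets much cheaper when the access pattern is one the prefetchers can predict. This is just a sketch, not a claim about any console's memory system:

```python
import time
import numpy as np

data = np.random.rand(10_000_000)
shuffled_index = np.random.permutation(data.size)

t0 = time.perf_counter()
total_sequential = data.sum()                # streaming reads: prefetch-friendly
t1 = time.perf_counter()
total_gathered = data[shuffled_index].sum()  # random gather first, then the sum
t2 = time.perf_counter()

print(f"sequential sum: {t1 - t0:.3f}s   gathered sum: {t2 - t1:.3f}s")
# The arithmetic is identical; the difference is (mostly) memory behaviour.
```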

megafenix said:

 

I already showed you how important fetch data is; please read it before saying something.

Here it is again, and it's not me, it's NVIDIA who says this:

https://developer.nvidia.com/content/vertex-texture-fetch

"

Vertex Texture Fetch

Vertex_Textures.pdf: With Shader Model 3.0, GeForce 6 and GeForce 7 Series GPUs have taken a huge step towards providing common functionality for both vertex and pixel shaders. This paper focuses specifically on one Shader Model 3.0 feature: Vertex Texture Fetch (PDF). It allows vertex shaders to read data from textures, just like pixel shaders can. This additional feature is useful for a number of effects, including displacement mapping, fluid and water simulation, explosions, and more. The image below shows the visual impact of adding vertex textures, comparing an ocean without (left) and with (right) vertex textures.

"

Plus, that image from Chipworks doesn't reveal half the GPU yet, so obviously there could be more stream cores, enough to reach 400, and there would even be space enough for the tessellator, rasterizer and other components.


 

 

This again? That implementation of vertex texture fetch has zero relevance.
GPUs have changed substantially since the GeForce 6 and 7. (We are at something like the GeForce 15 now.)
Basically every last-gen console, next-gen console and PC has had it and used it at some point (or a variation thereof, with a possible exception for the Wii).

This is going round in circles; even when I provide you with *proof* you fall back on your circular thinking.

I still firmly believe the Wii U is roughly 50% more powerful than the HD twins, maybe even double, but it achieves that with efficiency and not 400 stream processors.



--::{PC Gaming Master Race}::--


If only we could discuss this matter without extremists from both sides turning it into a clusterfuck... (Though the two main offenders on the anti-Wii U side are banned currently)



Pemalite said:
megafenix said:

 

400 stream cores ain't a problem. As I told you, why don't you check the Wii U GPU die size and compare it to the Redwood XT? And don't forget to check the die size that AnandTech reported and the one from Chipworks, then apply the same logic to the Redwood and you will get it.

Can't you fit 94mm2 into 96mm2?

Please check the AnandTech and Chipworks reports on the Wii U die size before responding. Plus, don't forget that the Redwood XT also has 20 TMUs, not the 16 the Wii U is speculated to have, so all that silicon area could be used for whatever Nintendo wanted to put in their fixed-function hardware. And yeah, don't forget that the Redwood XT also has a tessellator and other things, so those are already accounted for in the die area.


You're only comparing die-sizes.

You can have varying amounts of transistor density at the same fabrication node.
It's not as black and white as you religiously believe.

As for the die sizes themselves, you're omitting the fact that AnandTech didn't take into account the protective shell, and they didn't single out the GPU portion of the chip.

megafenix said:

Cache is more important than you think; it stores data, don't you see?

The memory export on the 360 is used to move fetch data to main RAM because the eDRAM is locked for the framebuffer. On the Wii U you can store that fetch data in the eDRAM and your performance will go up by a lot. Of course, in ports you won't see this, because the ports were done initially on the 360 and moved to the Wii U, and since the 360 stores the fetch data in main RAM, so will the Wii U port.

 

I'm not denying that one bit.
But you make it sound as if the eDRAM solves all of the lack-of-compute problems known to man; it doesn't.
A faster *large* pool of GDDR5 memory will always be preferred over eDRAM/eSRAM.

Remember, it's not the eDRAM that does all the calculations that put the pretty pictures on your screen, combined with the physics and A.I.
I harken back to the road analogy.

Besides, that's what predictors are for if you don't have an abundance of fast, low-latency memory: they fetch the data ahead of time so it's ready to be processed.
Actually, if you look at Intel's and AMD's processors, you will notice that a stupidly massive portion of the die is built to hide memory latency and bandwidth limitations, and only very little of it is compute resources.

megafenix said:

 

I already showed you how important fetch data is; please read it before saying something.

Here it is again, and it's not me, it's NVIDIA who says this:

https://developer.nvidia.com/content/vertex-texture-fetch

"

Vertex Texture Fetch

Vertex_Textures.pdf: With Shader Model 3.0, GeForce 6 and GeForce 7 Series GPUs have taken a huge step towards providing common functionality for both vertex and pixel shaders. This paper focuses specifically on one Shader Model 3.0 feature: Vertex Texture Fetch (PDF). It allows vertex shaders to read data from textures, just like pixel shaders can. This additional feature is useful for a number of effects, including displacement mapping, fluid and water simulation, explosions, and more. The image below shows the visual impact of adding vertex textures, comparing an ocean without (left) and with (right) vertex textures.

"

Plus, that image from Chipworks doesn't reveal half the GPU yet, so obviously there could be more stream cores, enough to reach 400, and there would even be space enough for the tessellator, rasterizer and other components.


 

 

This again? That implementation of vertex texture fetch has zero relevance.
GPUs have changed substantially since the GeForce 6 and 7. (We are at something like the GeForce 15 now.)
Basically every last-gen console, next-gen console and PC has had it and used it at some point (or a variation thereof, with a possible exception for the Wii).

This is going round in circles; even when I provide you with *proof* you fall back on your circular thinking.

I still firmly believe the Wii U is roughly 50% more powerful than the HD twins, maybe even double, but it achieves that with efficiency and not 400 stream processors.

Thanks for all the info you posted in this thread, Pemalite. It was interesting, to say the least.

The last rumour I read was that the Wii U GPU had 192 stream processors. How do you think that would work, based on your knowledge of the VLIW architectures?



Scoobes said:

Thanks for all the info you posted in this thread, Pemalite. It was interesting, to say the least.

The last rumour I read was that the Wii U GPU had 192 stream processors. How do you think that would work, based on your knowledge of the VLIW architectures?


192 stream processors *could* be possible.
That is, 8 logical blocks with 24 ALUs each.

However, as far as I know, AMD never built a chip like that.
You need to keep in mind, though, that VLIW4 was born out of a necessity for more efficiency. VLIW4 debuted on the same fabrication process as the later VLIW5 parts (the Radeon 5000 series), so both were fairly constrained in transistor counts, and AMD spent a lot of time engineering VLIW4 for the best transistor use/performance. If Nintendo did indeed deviate from the status quo that AMD built, it would be an interesting choice and could explain a lot about why the design is fairly different from any current GPU.

When AMD was designing the VLIW set for its GPUs, it was still a world very much dominated by DirectX 9 games. Over time, as PC games evolved and became more technically rich, the 5th unit in the VLIW architecture often went unused, and unused portions of a GPU are inefficient, so they dropped the 5th unit and folded its functionality into the other units.

The result was that you could have 20% more shader clusters with VLIW4, more heavily utilised, for the same amount of transistor space, which is why the Radeon 5870, despite having more shaders than the Radeon 6970, was actually slower: the 6970 simply had more, smaller clusters that were being used more heavily.
It doesn't help that VLIW is stupidly reliant on real-time compilers to achieve full utilisation; it's easier for a compiler to manage VLIW4 than VLIW5.
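To put hypothetical numbers on that utilisation argument: if the compiler can only fill, say, ~3.4 slots per issue on average (an illustrative figure, not a measurement), the narrower bundles waste less of the machine:

```python
# Effective ALU throughput under an assumed average of ~3.4 packed slots per issue.
def effective_alus(total_alus, slots_per_bundle, avg_slots_filled=3.4):
    bundles = total_alus / slots_per_bundle
    return bundles * min(avg_slots_filled, slots_per_bundle)

print(effective_alus(1600, 5))  # Radeon 5870-style VLIW5 -> ~1088 "busy" ALUs
print(effective_alus(1536, 4))  # Radeon 6970-style VLIW4 -> ~1306 "busy" ALUs
# Fewer raw ALUs, better packed, can come out ahead, which is the 5870 vs 6970
# point made above.
```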

But that's just talk of the shader arrays; there is more to a GPU than that.

If you look back through history, geometry performance only increased by a few multiples whilst shader performance increased by hundreds of times, if not more. Suddenly, over the last few years, there has been a massive focus on improving geometry performance, and this is where the Wii U has a stupidly *massive* advantage over the Xbox 360 and PlayStation 3; this generation should put a much larger focus on that area of a game's graphics than we have ever seen before.
This is an area where it would be hard to argue for the Wii U having a 100x or greater performance advantage over the Xbox 360 and PlayStation 3. (I'm not kidding!)

I'm excited to see how games will look in a couple of years, across every single platform, in regards to geometry.



--::{PC Gaming Master Race}::--

Pemalite said:
Scoobes said:

Thanks for all the info you posted in this thread permalite. It was interesting to say the least.

The last rumour I read was that the WiiU GPU had 192 stream processors. How do you think that would work based on your knowledge of the VLIW architectures?


192 stream processors *could* be possible.
That is, 8 logical blocks with 24 ALUs each.

However, as far as I know, AMD never built a chip like that.
You need to keep in mind, though, that VLIW4 was born out of a necessity for more efficiency. VLIW4 debuted on the same fabrication process as the later VLIW5 parts (the Radeon 5000 series), so both were fairly constrained in transistor counts, and AMD spent a lot of time engineering VLIW4 for the best transistor use/performance. If Nintendo did indeed deviate from the status quo that AMD built, it would be an interesting choice and could explain a lot about why the design is fairly different from any current GPU.

When AMD was designing the VLIW set for its GPUs, it was still a world very much dominated by DirectX 9 games. Over time, as PC games evolved and became more technically rich, the 5th unit in the VLIW architecture often went unused, and unused portions of a GPU are inefficient, so they dropped the 5th unit and folded its functionality into the other units.

The result was that you could have 20% more shader clusters with VLIW4, more heavily utilised, for the same amount of transistor space, which is why the Radeon 5870, despite having more shaders than the Radeon 6970, was actually slower: the 6970 simply had more, smaller clusters that were being used more heavily.
It doesn't help that VLIW is stupidly reliant on real-time compilers to achieve full utilisation; it's easier for a compiler to manage VLIW4 than VLIW5.

But that's just talk of the shader arrays; there is more to a GPU than that.

If you look back through history, geometry performance only increased by a few multiples whilst shader performance increased by hundreds of times, if not more. Suddenly, over the last few years, there has been a massive focus on improving geometry performance, and this is where the Wii U has a stupidly *massive* advantage over the Xbox 360 and PlayStation 3; this generation should put a much larger focus on that area of a game's graphics than we have ever seen before.
This is an area where it would be hard to argue for the Wii U having a 100x or greater performance advantage over the Xbox 360 and PlayStation 3. (I'm not kidding!)

I'm excited to see how games will look in a couple of years, across every single platform, in regards to geometry.

This explains 100% why the Wii U GPU is a 160-shader part.
Quote:
Originally Posted by Entropy 
What disturbs me about the "160 SPUs" hypothesis, is not only the glaring disparity between eDRAM and SPU density.

There is no glaring disparity between edram and SPU density. We've gone over this. It's not a TSMC fabbed chip and it's not laid out by AMD. Direct comparison of SPU density with AMD/TSMC 40nm chips are a very, very poor basis for comparison. (Register counts, however, will transfer across processes).

You went quiet for a few months after insulting people for questioning whether it was a TSMC chip (you claimed TSMC was a 'FACT!!') but it turns out you were wrong, and now seem to have forgotten that the basis for your 'SPU density' argument isn't there.

Quote:
It is also that every developer testimony I've seen has said that the GPU has performance margins over the 360, typically allowing them to implement some minor new feature(s).

Actually most of them didn't add new features. Some did, and at least one has said they could have if they had more time. And some games reduced GPU load in specific areas.

But this is only meaningful if you want to claim that the Wii U couldn't improve anything without having more 'raw ALU' than the 360. And that's fundamentally wrong, given what we have seen happen in the PC space from which these GPUs are derived.

Quote:
The 360 has 240 (VLIW5) SPUs, and the hypothesis posits the WiiU to have 160 (VLIW5) SPUs of a generationally very close architecture. The clock frequency is similar, so if all other things were equal, the 360 would have 50% higher raw ALU capabilities.

Wrong again! And again it's something that's come up in this thread a number of times. The 360 isn't VLIW5, it's an entire generation (or more) behind, and 'raw ALU' is only 23% higher on the Xbox 360 and nothing like 50%. And god knows how much more efficient VLIW5 will be. In things like triangle setup and raw fillrate the Wii U will be outright faster.

Not to mention that the Wii U won't have the same tiling penalties and that it can read from edram without a copy out / resolve to main ram (and even read from and write to the same buffer).

The 360 doesn't even have early z rejection!


Quote:
But this clashes with the sentiments of developers on record.

Not at all, especially as the 'this' (above) is basically wrong.

Quote:
You could assume that Nintendo has commissioned modifications to the ALU-blocks that make them a whole lot more performant ...

There's absolutely no need to make that assumption.

And a 320 shader part would be able to trounce the 360, running 360 games easily at much higher resolutions just like it does on the PC. Seeing minor improvements is not justification for vastly more powerful hardware.



This fanboy talk needs to stop. Nintendo has made weak hardware, on par with or slightly better than current gen; multiplatform games prove this and developer comments back it up, along with the NeoGAF-confirmed specs, which are backed by a mod. Here are some developer comments, by the way:

http://www.gamesindustry.biz/articles/2012-04-02-wii-u-less-powerful-than-ps3-xbox-360-developers-say

"No, it's not up to the same level as the PS3 or the 360," said one developer who's been working with the Wii U. What does that mean? "The graphics are just not as powerful," reiterated the source.
Anonymous developer



Funny thing is, NeoGAF confirmed that the Wii U actually has fewer shaders.



"This developer is not alone in their opinion. Another developer at a major company confirmed this point of view. "Yeah, that's true. It doesn't produce graphics as well as the PS3 or the 360," said the source. "There aren't as many shaders, it's not as capable. Sure, some things are better, mostly as a result of it being a more modern design. But overall the Wii U just can't quite keep up."

http://www.eurogamer.net/articles/2012-03-26-darksiders-2-dev-wii-u-hardware-on-par-with-current-gen

"So far the hardware's been on par with what we have with the current generations. Based on what I understand, the resolution and textures and polycounts and all that stuff, we're not going to being doing anything to up-rez the game, but we'll take advantage of the controller for sure."



http://www.nintendolife.com/news/2013/09/developer_interview_black_forest_games_on_bringing_giana_sisters_twisted_dreams_to_wii_u

""Overall the hardware itself is slightly faster than the other current-gen consoles.

"Then we started working on the port, we already had our game released on the X360, and going from a console release to Wii U is easier since the consoles have very similar performance characteristics"

http://mynintendonews.com/2013/05/17/ea-senior-engineer-the-wii-u-is-crap-less-powerful-than-an-xbox-360/

EA SENIOR ENGINEER: 'THE WII U IS CRAP, LESS POWERFUL THAN AN XBOX 360'