
Wii U GPU new info and more speculation

ninjablade said:
HoloDust said:
RazorDragon said:

First, the Wii U could handle 480 SPs based on die size alone, and that's counting the GPU along with the eDRAM. Now it can't handle 320 SPs in the same space that was thought to pack 480 SPs, and could only handle 160 SPs? That would only be possible if the GPU were made on the 90nm process, you know, the same one used in 2006 to make the Wii. So, no, there's no way it's less than 320 SPs.

I think 480 SPs was always a bit of a stretch (but hoped for) after the first measurements...

480:24:8 @40nm GPU is 118mm^2

400:20:8 @ 40nm GPU is 104mm^2

I'm still thinking it's a 320 SP part that is hindered by the memory architecture. For comparison, here's the 5550 (a 320:16:8 part), same clock (550MHz), with different memory bandwidths:

5550 GDDR5/128bit (51.2GB/s) - 27VP

5550 DDR3/128bit (25.6GB/s) - 21.8VP

5550 DDR2/128bit (12.8GB/s, the same as suggested for the Wii U) - 16.5VP

X360 - something around 13-14VP

Rumoured PS4 GPU - around 140VP
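
(A quick sanity check on those figures, using only the numbers quoted above. This is a minimal sketch in Python; it assumes the standard VLIW5 rate of 2 flops per SP per clock, and takes the VP ratings as given rather than computing them.)

# Back-of-the-envelope: theoretical shader throughput for the candidate
# configs, plus how the quoted 5550 VP ratings scale with memory bandwidth.
# Assumes the usual VLIW5 figure of 2 flops (one MAD) per SP per clock.

CLOCK_GHZ = 0.550  # 550 MHz, the clock assumed for Latte and the HD 5550

for sps in (480, 400, 320, 160):
    gflops = sps * 2 * CLOCK_GHZ
    print(f"{sps:>3} SPs @ 550 MHz -> {gflops:.0f} GFLOPS")

# Quoted 5550 figures: (memory type, bandwidth GB/s, VP rating)
hd5550 = [("GDDR5", 51.2, 27.0), ("DDR3", 25.6, 21.8), ("DDR2", 12.8, 16.5)]
for mem, bw, vp in hd5550:
    print(f"5550 {mem}: {bw} GB/s -> {vp} VP ({vp / bw:.2f} VP per GB/s)")

Halving the bandwidth clearly doesn't halve the rating, but the DDR2 configuration still gives up roughly 40% against GDDR5, which is the memory-bottleneck argument in a nutshell.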

If Nintendo designed in such low-level bottlenecks in CPU and RAM performance, would it make any sense to chuck in a GPU that will be constantly throttled? A 320 shader part would be more GPU than the platform can support, it seems, which means a 320 shader part would be wasted silicon and wasted money, and that's really not in keeping with Nintendo's philosophy! The logic behind proportionally massive GPU power isn't there, because things like GPGPU need data to work on and high throughput. Ergo, if a 320 shader part can be proven far more capable than the 360, as function does, that is pretty conclusive for me. Nintendo wouldn't put in a part that capable and then completely gimp it unless their engineers are incompetent.

from a Beyond3D mod

Well, I'm not quite certain about Ninty's design logic, or how much (and if) it can negate the suggested slow memory, but here's another comparison - the Llano APU with the HD6530D, which is also a 320:16:8 part, running at 443MHz with memory bandwidth of 29.9GB/s, is rated at 18VP.

BTW, looking at the 4770 @ 40nm and the 4670 @ 55nm, I'm fairly certain they can fit 320:16:8 into something around 80-90mm^2 @ 40nm, so I think there's plenty of room there for 320 SPs.
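
(That 80-90mm^2 guess is easy to reproduce from the two die sizes quoted earlier, if you assume area scales roughly linearly with SP count over this range. A rough sketch, not a layout claim:)

# Rough linear extrapolation from the two quoted 40nm die sizes:
#   400:20:8 -> 104 mm^2, 480:24:8 -> 118 mm^2.
# Assumes area scales roughly linearly with SP count in this range.

area_400, area_480 = 104.0, 118.0
mm2_per_sp = (area_480 - area_400) / (480 - 400)   # ~0.175 mm^2 per SP
area_320 = area_400 - (400 - 320) * mm2_per_sp
print(f"~{mm2_per_sp:.3f} mm^2/SP -> 320:16:8 estimate: ~{area_320:.0f} mm^2")
# Prints roughly 90 mm^2, in line with the 80-90 mm^2 guess above.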



hivycox said:
VGKing said:
hivycox said:
Chiefpitchanono said:

I don't know enough about specs and GPUs to really get into this debate. But as a consumer I could certainly see the difference between Wii and Xbox 360 and PS3 games, and I can certainly see that the Wii U games look amazing and obviously much better than Wii. I can't wait to see games on the next PlayStation and Xbox, but from what I understand from PC guys who have already been using equipment that is similar to or better than what is supposed to be in the new machines, the differences are not going to be as extreme as they were last generation. For a common consumer, if I have to look at the same game at the same time, side by side, just to be able to tell the difference, then I think Nintendo is going to be just fine. If that is the case, the other guys could actually be in trouble once the techies have made their purchases, because when it comes down to the common masses boosting sales, if they can't see an obvious reason to spend more money they are going to go with the cheapest, and that's always Nintendo.


Yeah, you are right about that... tech people do compare console specs with PC specs, and that's the problem... consoles are much more capable with an ordinary GPU than a PC with the same GPU would be...

But other than that, it is true that the next generation of gaming will not be defined by graphics... it's hard for SOME people around here to realize that, but it's a fact... At every other generation jump you could easily see the graphical difference, but that's not the case this time... So in a way we can tell that Nintendo is fine, because they have the innovative brains to make games for the Wii U that will be fantastic gameplay-wise as well as graphics-wise... I'm of course referring to Nintendo's first and second party games... these games can easily beat any other exclusive...

It's Sony and Microsoft who should show more innovation... furthermore, they shouldn't release a console above $400 because, as you already said, the masses will always buy the cheapest one, which would be the Wii U... AND these people won't see any difference between the Wii U and the other consoles...

Nintendo is fine... Sony and Microsoft... not so much.

1. Wii U's problems are about more than just its power.
2. PS4/720 are designed to be in it for the long haul. These consoles will peak in their 5th or 6th years. This is only possible due to the amazing 3rd party support these consoles will receive.
3. @bold... I think you severely underestimate people. Just wait until the 20th. I'll have some amazing gifs for you.
4. Nintendo is NOT fine. Sony and Microsoft are in much better positions going into the next generation.


Wow, so much fail in your post...

1. The Wii U doesn't have any problems whatsoever... fanboys just nitpick... there is no problem that can't be solved by a software update... and the Wii U will get two major ones in the coming months...

2. You really think these consoles will live that long? Wow, so let me get this straight: these "consoles" won't make a profit because they will be too expensive due to their "future proof" hardware... so that makes sense... it's the PS3 all over again... and this time it's expected xD LMAO
Don't you think Sony learnt from the failure that was the PS3 launch??? Do you really think they would be stupid enough to pull that again? Well, if that's the case, and I don't see why not given the rumoured specs, then it's time to say goodbye to Sony... And Microsoft?? Well, they will get their asses handed to them by the Steambox... that's for sure.

3. No, you don't understand people... first: people won't see the tiny differences in graphics the way you think they will... and second: see above = those "amazing gifs" will either be NOT so amazing or just flat out wrong, for the reasons I stated above... so pick your poison!

4. Well, Sony just reported their next financial loss... need I say more??? I stopped counting their years without profit because there have been that many...

Here is an interesting video from ReviewTechUSA about this topic: http://www.youtube.com/watch?v=r2jIrK8cWqA

 

Bottom line = if Sony listens to some of these arguments (not yours alone), then... well... I think you know what happens then.

1. It does have problems, but I was talking more about Nintendo's choices and lack of 3rd party relationships than technical stuff. But now that you bring it up, that slow OS is pretty bad. It should never have been that slow to begin with.

2. Of course they will, lmfao. Minimum 5 year life cycle. I don't want another 7-8 year generation though. That's just too much. Steamboxes aren't a threat; they are basically closed PCs. They will lack what makes consoles appealing, which is price and exclusive software.

3. Differences won't be tiny, lmfao.

4. I don't care about Sony as a whole. We already know they're bleeding money. We also know they made a huge mistake with the PS3 and over-engineered it. The PS4 will be a lot more developer friendly and it won't be $599.

February 20th can't come soon enough. Maybe that will finally shut you up. 



ninjablade said:
RazorDragon said:
ninjablade said:

A more realistic scenario, from Beyond3D:

With all the peeping at die shots (which has been tremendous fun) I think we might have gotten tunnel vision and be losing the "big picture". The question of "320 vs 160" shaders is still unanswered and stepping back should help us answer it.

The current popular hypothesis is that Latte is a 16:320:8 part @ 550 MHz. Fortunately, we can see how such a part runs games on the PC. You know, the PC, that inefficient beast that's held back by Windows, thick APIs, DirectX draw-call bottlenecks that break the back of even fast CPUs, and all that stuff. Here is an HD 5550, a VLIW5 GPU with a 16:320:8 configuration running at 550 MHz:

http://www.techpowerup.com/reviews/H...HD_5550/7.html

And it blows past the 360 without any problems. It's not even close. And that's despite being on the PC!

Now let's scale things back a bit. This is the Llano A3500M w/ Radeon 6620G - a 20:400:8 configuration GPU, but it runs @ 444 MHz, meaning it has almost exactly the same number of gflops and TMU ops as the HD 5550, only it's got about 20% lower triangle setup and fillrate *and* it's crippled by a 128-bit DDR3-1333 memory pool *and* it's linked to a slower CPU than the above benchmark (so more likely to suffer from Windows/DX bottlenecks). No super fast pool of eDRAM for this poor boy!

http://www.anandtech.com/show/4444/a...pu-a8-3500m/11
http://www.anandtech.com/show/4444/a...pu-a8-3500m/12

And it *still* comfortably exceeds the 360 in terms of the performance it delivers. Now let's look again at the Wii U. Does it blow past the 360? Does it even comfortably exceed the 360? No, it:

keeps
losing
marginally
to
the
Xbox
360

... and that's despite it *not* being born into the performance wheelchair that is the Windows PC ecosystem. Even if the Wii U can crawl past the 360 - marginally - in a game like Trine 2, it's still far below what we'd expect from an HD 5550 or even the slower and BW-crippled 6620G. So why is this?

It appears that there are two options. Either Latte is horrendously crippled by something (API? memory? documentation? "drivers"?) to the point that even an equivalent or less-than-equivalent PC part can bounce its ass around the field, or... it's not actually a 16:320:8 part.

TL;DR version:
Latte seems to be either:
1) a horrendously crippled part compared to equivalent (or lower) PC GPUs, or
2) actually a rather efficient 160 shader part

Aaaaaaand I'll go with the latte(r) as the most likely option. Face it dawgs, the word on the street just don't jive with the scenes on the screens
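
(The gflops parity claimed above between the HD 5550 and the 6620G checks out arithmetically. A minimal sketch, again assuming 2 flops per SP per clock for these VLIW5 parts:)

# Verify the "same gflops, ~20% lower setup/fillrate" comparison above.
# Assumes the usual VLIW5 rate of 2 flops (one MAD) per SP per clock.

def gflops(sps, clock_ghz):
    return sps * 2 * clock_ghz

hd5550 = gflops(320, 0.550)   # 16:320:8 @ 550 MHz
r6620g = gflops(400, 0.444)   # 20:400:8 @ 444 MHz

print(f"HD 5550: {hd5550:.1f} GFLOPS, 6620G: {r6620g:.1f} GFLOPS")
# Triangle setup and fillrate scale with clock, so the 6620G runs them at
# 444/550 of the 5550's rate:
print(f"6620G setup/fillrate vs 5550: {0.444 / 0.550:.0%}")  # ~81%, i.e. ~20% lower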


I agree that there is probably something missing. The 320 number doesn't seem to match up with anything. The layout of the SIMD looks the same as for 20 ALUs, with the same number of cache blocks. The only thing explaining 320 SPs is the supposed 40nm process and the block being slightly too big. Even that doesn't explain it fully.

The SIMD blocks are 60% the size of Llano's and only about 30% larger than Bobcat's 20-SP blocks. Even on 40nm, it's pretty absurd that the density would increase so much. We also don't have conclusive evidence it is 40nm. The only thing that pins it at 40nm right now seems to be the eDRAM size, which is a really rough estimate from what I can tell.

There are too many unconfirmed things. I don't even know how everyone jumped onto the 320 SPs ship so fast. So far, the similarity of the SIMD blocks to Bobcat's should point at 20 shaders per block on a larger manufacturing process. That's what you'd get if you only looked at the SIMD blocks.

I find it much more likely that they found a way to pack the eDRAM slightly denser than that they somehow packed the ALU logic smaller and cut away half the cache blocks. Or maybe the whole chip is 40nm but the logic isn't packed very densely because it wasn't originally designed for that process and fab. All of this is much more likely, from my point of view, than magically having 320 SPs in so little space.
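
(Those relative block sizes can be turned into a rough implied-SP count. The sketch below uses only the quoted ~30% figure, assumes Bobcat's 20-SP block as the 40nm density baseline, and takes 40 SPs per block as the target, which is what 320 SPs spread over 8 SIMD blocks would require; absolute block areas aren't known here:)

# If a Latte SIMD block is ~1.3x the area of Bobcat's 20-SP block, then at
# Bobcat-like density it holds ~26 SPs. Holding 40 SPs per block (320 total
# across 8 blocks) would need ~1.5x Bobcat's SPs-per-area.

bobcat_sps, size_vs_bobcat = 20, 1.30   # quoted: ~30% larger than Bobcat's block
implied_sps = bobcat_sps * size_vs_bobcat
density_needed = 40 / implied_sps
print(f"Implied SPs at Bobcat density: {implied_sps:.0f}")
print(f"Density increase needed for 40 SPs/block: {density_needed:.2f}x")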


First, the Wii U could handle 480 SPs based on die size alone, and that's counting the GPU along with the eDRAM. Now it can't handle 320 SPs in the same space that was thought to pack 480 SPs, and could only handle 160 SPs? That would only be possible if the GPU were made on the 90nm process, you know, the same one used in 2006 to make the Wii. So, no, there's no way it's less than 320 SPs.

Why talk about something when you don't understand it? What you stated goes against everything I've read from tech heads at NeoGAF and Beyond3D. At the moment, the blocks that hold the SPs are not even big enough to hold 40 SPs, which is why everybody is confused.

 

 

As HoloDust said:

480:24:8 @40nm GPU is 118mm^2

400:20:8 @ 40nm GPU is 104mm^2

 

And TSMC's 40nm eDRAM should give 37mm^2 (got that from http://forum.beyond3d.com/showthread.php?p=1680372 )

So that would give us a final die size of 155mm^2, which is a little smaller than the 156.21mm^2 die size of the Wii U's GPU. Adding design overhead/redundancy for I/O and to increase yields would get us into the 156.21mm^2 range. Of course, that was all speculation before the photo from Chipworks came, but it was all discussed on NeoGAF and Beyond3D. I guess you didn't read those topics enough, because what I said was in line with the expectations of NeoGAF and Beyond3D users.
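
(Spelling out the arithmetic behind that 155mm^2 figure, using only the estimates quoted above:)

# Die budget from the quoted estimates: 480:24:8 GPU logic @ 40nm plus
# TSMC 40nm eDRAM, compared against the measured Latte die size.

gpu_logic = 118.0    # mm^2, 480:24:8 @ 40nm (HoloDust's figure)
edram     = 37.0     # mm^2, TSMC 40nm eDRAM (Beyond3D estimate)
measured  = 156.21   # mm^2, Chipworks' Latte die size

budget = gpu_logic + edram
print(f"Estimated: {budget:.0f} mm^2, measured: {measured} mm^2, "
      f"slack for I/O and redundancy: {measured - budget:.2f} mm^2")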



RazorDragon said:
*snip - verbatim re-quote of the exchange directly above*

http://forum.beyond3d.com/showpost.php?p=1703471&postcount=4532

http://forum.beyond3d.com/showpost.php?p=1703534&postcount=4533

Read and understand. The link you provided was from 3 months ago; the discussion has since progressed now that we have GPU pics.



ethomaz said:
There is no magic or miracle... the GPU is weak... just accept that.

Do you have an alternate theory as to what that 30% of the chip is doing?

No? Then don't try to make out as though you know more than you do. And don't bother going "Hollywood", because at most, that would take up maybe 10% of the chip, and you'll probably find that part of the known 70% includes some of it already.



happydolphin said:
ethomaz said:
There is no magic or miracle... the GPU is weak... just accept that.

Can you at least address the points in the OP? There is apparently "still 30% of the Wii U GPU die unknown and open for speculation."

Could you, rather than just saying that? I understand people need to come to grips with reality, but don't be too eager either. The OP raises a question; don't pretend it isn't there.

I already did that in this forum... so this question in the OP is redundant.



I find it interesting that the Wii U is literally half the size of both the PS3 Slim and Xbox 360 Slim, yet packs comparable processing power, uses less energy, and runs infinitely more quietly. It also runs last gen AAA games with GamePad support AT LAUNCH. Just think what kind of power we will get from this machine in 2 years, when the hardware is fully utilized.



Aielyn said:
ethomaz said:
There is no magic or miracle... the GPU is weak... just accept that.

Do you have an alternate theory as to what that 30% of the chip is doing?

No? Then don't try to make out as though you know more than you do. And don't bother going "Hollywood", because at most, that would take up maybe 10% of the chip, and you'll probably find that part of the known 70% includes some of it already.

Do you understand anything about GPU architecture? No. Then be quiet.

The Wii U's GPU is not as customized as some guys here say... the Wii U's GPU has a space for eDRAM, another for the GPU itself, and another that is unknown. The unknown part is not GPU shaders; it's something else, like some fixed function hardware or even some part for Wii BC... because the Wii U needs that for some purpose.

The Wii U's GPU part is known already and it has at maximum 320 SPs... the other 30% can free up or help the GPU/CPU with the workload, but that part doesn't add raw power to the GPU... you guys think there is some "special sauce", "magic" or "unicorn" inside the Wii U's GPU... but the reality is otherwise.

The Wii U GPU is weak... about 50% better than the PS360 GPU... just face it.

PS. I don't need to get into the fact that some GPUs need blank (or unused) space due to production design.

Edit - I want to apologize to @Aielyn because I said some strong and poorly educated words... sorry.
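
(For what it's worth, the "about 50% better" figure is consistent with the raw numbers if you take the 320 SP @ 550MHz hypothesis at face value. A sketch assuming the commonly cited 240 GFLOPS theoretical peak for the 360's Xenos; these are theoretical peaks, not measured performance:)

# Raw-throughput comparison under the 320 SP @ 550 MHz hypothesis for Latte,
# against the commonly cited theoretical peak for the Xbox 360's Xenos.

latte = 320 * 2 * 0.550   # 352 GFLOPS, assuming 2 flops per SP per clock
xenos = 240.0             # GFLOPS, commonly cited Xenos theoretical peak

print(f"Latte ~{latte:.0f} GFLOPS vs Xenos ~{xenos:.0f} GFLOPS: "
      f"{latte / xenos - 1:.0%} higher")   # ~47%, i.e. roughly "50% better"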



Jankelope said:

I find it interesting that the Wii U is literally half the size of both the PS3 Slim and Xbox 360 Slim, yet packs comparable processing power, uses less energy, and runs infinitely more quietly. It also runs last gen AAA games with GamePad support AT LAUNCH. Just think what kind of power we will get from this machine in 2 years, when the hardware is fully utilized.

The Wii U is new... the PS360 are 2006 projects... I can't believe you find that interesting.



Jankelope said:

I find it interesting that the Wii U is literally half the size of both the PS3 Slim and Xbox 360 Slim, yet packs comparable processing power, uses less energy, and runs infinitely more quietly. It also runs last gen AAA games with GamePad support AT LAUNCH. Just think what kind of power we will get from this machine in 2 years, when the hardware is fully utilized.


It will still get easily smoked by the PS4/720.

Nintendo put the emphasis on power efficiency; that's probably what they ended up spending most of their R&D on. Precisely what you said: getting about PS3/360 level power, plus the ability to run a low-res second screen, into a 33 watt power envelope and a small casing. That's where their emphasis was.

Also, I don't see the box ever being pushed that hard. There weren't many games that maxed out the original Wii, lol, and Nintendo ditched a lot of developers, like Rare and Factor 5, that would push their hardware to the max in the past.

Third parties don't give a crap.