
Forums - Nintendo - Wii U GPU new info and more speculation

Looks like more and more evidence is mounting up that eliminates the 4000 series as a candidate.



ninjablade said:

A more realistic scenario, from Beyond3D:

With all the peeping at die shots (which has been tremendous fun) I think we might have gotten tunnel vision and be losing the "big picture". The question of "320 vs 160" shaders is still unanswered and stepping back should help us answer it.

The current popular hypothesis is that Latte is a 16:320:8 part @ 550 MHz. Fortunately, we can see how such a part runs games on the PC. You know, the PC, that inefficient beast that's held back by Windows, thick APIs, DirectX draw-call bottlenecks that break the back of even fast CPUs, and all that stuff. Here is an HD 5550, a VLIW5 GPU with a 16:320:8 configuration running at 550 MHz:

http://www.techpowerup.com/reviews/H...HD_5550/7.html

And it blows past the 360 without any problems. It's not even close. And that's despite being on the PC!

Now let's scale things back a bit. This is the Llano A8-3500M w/ Radeon 6620G - a 20:400:8 configuration GPU, but it runs @ 444 MHz, meaning it has essentially the same number of GFLOPS and TMU ops as the HD 5550, only it's got about 20% lower triangle setup and fillrate, *and* it's crippled by a 128-bit DDR3-1333 memory pool, *and* it's linked to a slower CPU than the above benchmark (so it's more likely to suffer from Windows/DX bottlenecks). No super fast pool of eDRAM for this poor boy!

http://www.anandtech.com/show/4444/a...pu-a8-3500m/11
http://www.anandtech.com/show/4444/a...pu-a8-3500m/12
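(A quick sanity check on the GFLOPS/TMU equivalence claimed above - a back-of-the-envelope sketch, assuming the usual 2 FLOPs per shader per clock for these VLIW5 parts:)

```python
# Peak shader throughput and texel rate, assuming 2 FLOPs (one multiply-add)
# per shader per clock, as is standard for AMD's VLIW5 parts.
def peak_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0

def texel_rate_gt(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0

print(peak_gflops(320, 550), texel_rate_gt(16, 550))  # HD 5550:      352.0 GFLOPS, 8.8  GTexels/s
print(peak_gflops(400, 444), texel_rate_gt(20, 444))  # Llano 6620G:  355.2 GFLOPS, 8.88 GTexels/s
```

On paper the two parts are effectively interchangeable in raw throughput, which is what makes the comparison fair.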

And it *still* comfortably exceeds the 360 in terms of the performance that it delivers. Now let's look again at the Wii U. Does it blow past the 360? Does it even comfortably exceed the 360? No, it:

keeps
losing
marginally
to
the
Xbox
360

... and that's despite it *not* being born into the performance wheelchair that is the Windows PC ecosystem. Even if the Wii U can crawl past the 360 - marginally - in a game like Trine 2, it's still far below what we'd expect from an HD 5550 or even the slower and bandwidth-crippled 6620G. So why is this?

It appears that there are two options. Either Latte is horrendously crippled by something (API? memory? documentation? "drivers"?) to the point that even an equivalent or less-than-equivalent PC part can bounce its ass around the field, or ... it's not actually a 16:320:8 part.

TL;DR version:
Latte seems to be either:
1) a horrendously crippled part compared to equivalent (or lower) PC GPUs, or
2) actually a rather efficient 160 shader part

Aaaaaaand I'll go with the latte(r) as the most likely option. Face it dawgs, the word on the street just don't jive with the scenes on the screens
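(To put rough numbers on the two TL;DR options - a sketch only, using the same 2 FLOPs per shader per clock assumption as above; the ~240 GFLOPS value for the 360's Xenos is the commonly cited ballpark, not a measurement:)

```python
def peak_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0

print(peak_gflops(320, 550))  # 320-shader Latte @ 550 MHz: 352.0 GFLOPS (HD 5550 class)
print(peak_gflops(160, 550))  # 160-shader Latte @ 550 MHz: 176.0 GFLOPS
print(peak_gflops(240, 500))  # Xenos, 240 ALUs @ 500 MHz:  240.0 GFLOPS (commonly cited figure)
```

On paper, the 160-shader reading puts Latte in the same rough bracket as Xenos, which is consistent with the near-parity seen in ports; the 320-shader reading would put it well clear of the 360, as the HD 5550 benchmark suggests it should be.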


I agree that there is probably something missing. The 320 number doesn't seem to match up with anything. The layout of the SIMD looks like it's the same as for 20 ALUs, with the same number of cache blocks. The only thing explaining 320 SPs is the supposed 40nm process and the block being slightly too big. Even that doesn't explain it fully.

The SIMD blocks are 60% the size of Llano's and only about 30% larger than Bobcat's 20 SPs. Even on 40nm, it's pretty absurd that the density increased so much. We also don't have conclusive evidence that it is 40nm. The only thing that pins 40nm right now seems to be the eDRAM size, which is a really rough estimate from what I can tell.

There are too many unconfirmed things. I don't even know how everyone jumped onto the 320 SPs ship so fast. So far, the similarities of the SIMD blocks compared with Bobcat should point at 20 shaders per block on a larger manufacturing process. That's what you'd get if you only looked at the SIMD blocks.

I find it much more likely that they found a way to pack the eDRAM slightly denser than that they somehow packed the ALU logic smaller and cut away half the cache blocks. Or maybe the whole chip is 40nm but the logic isn't packed very densely because it wasn't originally designed for that process and fab. This is all much more likely, from my point of view, than magically having 320 SPs in so little space.
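(To make the density argument concrete, here is a minimal relative-density sketch. It uses only the ~1.3x block-size ratio versus Bobcat quoted above, and assumes - as the post's framing implies - that the 320-SP hypothesis means 40 shaders per SIMD block versus 20 for the 160-SP hypothesis:)

```python
# Shader density of a Latte SIMD block relative to the Bobcat block it is
# being compared against (Bobcat block area normalised to 1.0, ~1.3x for Latte).
def relative_density(latte_sps, bobcat_sps=20, latte_area=1.3, bobcat_area=1.0):
    return (latte_sps / latte_area) / (bobcat_sps / bobcat_area)

print(relative_density(20))  # 160-SP case: ~0.77x Bobcat's density - plausible on an equal or older process
print(relative_density(40))  # 320-SP case: ~1.54x Bobcat's density - the "pretty absurd" jump described above
```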


It loses marginally to the 360?

 

http://www.youtube.com/watch?feature=player_embedded&v=0eH3DmUgokk

 

Really now? 



errorpwns said:
It loses marginally to the 360?

 

http://www.youtube.com/watch?feature=player_embedded&v=0eH3DmUgokk

 

Really now? 

No offence, but even with the video set to 720p I really don't see a difference between the Wii U and PS3/360 versions.



Tachikoma said:
No offence, but even with the video set to 720p I really don't see a difference between the Wii U and PS3/360 versions.


There's a huge difference. For starters look at the lighting, draw distance, and textures. 



errorpwns said:
There's a huge difference. For starters look at the lighting, draw distance, and textures. 

Lighting - it doesn't look any better or any worse; you're just selectively comparing different areas of the game where the lighting and weather conditions are different.

Draw distance - the actual draw distance is the same. The PS3 version fades in assets slightly slower; the 360 fades them in more or less the same as the Wii U. The only difference is that the Wii U's fade is more aggressive and applies lighting to the asset later, resulting in some foliage/trees appearing lighter for a few moments before the correct lighting is applied in the distance. That's neither a benefit nor a negative in either case. Again, however, you're most likely comparing footage from another console where the weather/time is different - you would need a direct comparison to tell properly.

Textures - are you even serious? Even set to 720p, YouTube still has significant compression artifacting degrading the image. Don't be ridiculous.

[EDIT] - Actually, sorry, I take that back. I just read the DF review, and they are in fact using the PC's higher-resolution textures, but everything else is exactly the same.



Tachikoma said:
Lighting - it doesn't look any better or any worse; you're just selectively comparing different areas of the game where the lighting and weather conditions are different.

Draw distance - the actual draw distance is the same. The PS3 version fades in assets slightly slower; the 360 fades them in more or less the same as the Wii U. The only difference is that the Wii U's fade is more aggressive and applies lighting to the asset later, resulting in some foliage/trees appearing lighter for a few moments before the correct lighting is applied in the distance. That's neither a benefit nor a negative in either case. Again, however, you're most likely comparing footage from another console where the weather/time is different - you would need a direct comparison to tell properly.

Textures - are you even serious? Even set to 720p, YouTube still has significant compression artifacting degrading the image. Don't be ridiculous.

[EDIT] - Actually, sorry, I take that back. I just read the DF review, and they are in fact using the PC's higher-resolution textures, but everything else is exactly the same.


Well, if you think better textures, lighting, and draw distance don't mean a thing, I don't know what kind of graphics you're expecting from the PS4 and Nextbox, but I'm sure you'll be disappointed. And even if you don't see improvements in draw distance and lighting, if the developers said they improved those areas, I'm inclined to believe them.

However, I can't believe improving the graphics would be possible with a 160-SP GPU like the Radeon HD 6450. I mean, as we can see in this video, it barely runs the game on low settings and at a lower-than-Wii U resolution:

http://www.youtube.com/watch?v=FSfdep-rPk0

The Wii U version looks like it's running on high, at least texture-wise. No amount of optimization can give these kinds of results, even if the DirectX API's overhead hurts performance on PC. I'm inclined to believe we're indeed talking about a 320-SP GPU in the Wii U; that, or the measured clock rate of the Wii U's GPU is wrong.
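(For what it's worth, here is what the "clock rate is wrong" alternative would have to look like to deliver the same raw throughput as the 320-SP reading - same back-of-the-envelope assumptions as earlier in the thread:)

```python
# Clock a 160-SP part would need to match a 320-SP part at 550 MHz,
# assuming 2 FLOPs per shader per clock.
def clock_needed_mhz(target_gflops, shaders):
    return target_gflops * 1000.0 / (shaders * 2)

target = 320 * 2 * 550 / 1000.0        # 352 GFLOPS from the 320-SP @ 550 MHz scenario
print(clock_needed_mhz(target, 160))   # 1100.0 MHz - double the measured 550 MHz
```

In other words, for the 160-SP reading to deliver 320-SP-class results, the measured 550 MHz clock would have to be off by a factor of two.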



errorpwns said:
 


It loses marginally to the 360?

 

http://www.youtube.com/watch?feature=player_embedded&v=0eH3DmUgokk

 

Really now? 

here let me embed for you:



RazorDragon said:
Well, if you think better textures, lighting, and draw distance don't mean a thing, I don't know what kind of graphics you're expecting from the PS4 and Nextbox, but I'm sure you'll be disappointed. And even if you don't see improvements in draw distance and lighting, if the developers said they improved those areas, I'm inclined to believe them.

However, I can't believe improving the graphics would be possible with a 160-SP GPU like the Radeon HD 6450. I mean, as we can see in this video, it barely runs the game on low settings and at a lower-than-Wii U resolution:

http://www.youtube.com/watch?v=FSfdep-rPk0

The Wii U version looks like it's running on high, at least texture-wise. No amount of optimization can give these kinds of results, even if the DirectX API's overhead hurts performance on PC. I'm inclined to believe we're indeed talking about a 320-SP GPU in the Wii U; that, or the measured clock rate of the Wii U's GPU is wrong.

I definitely saw a difference in lighting (especially the reflection effect on the road), having played the game on my friend's 360. The reflections in the other versions seem overly shiny to me, while the reflections in the PC/Wii U versions appear more realistic side by side (this may be due to the Wii U/PC using Shader Model 5/DX11). The draw distance appears a bit better to me as well, and there is a noticeable difference in texture resolution (mostly seen in the close-up shots of the road & buildings after a crash in the YouTube vid). I went over and watched a few city races on the 360 for comparison; the surfaces of the buildings appear much flatter & the textures appear a bit blurry until you're right up on them, and the overall look of the game seems a bit fuzzy (maybe the 360 version isn't native 720p?).

Is it possible that one issue is that other ports originated from the 360, while this one originates from the PC version? Maybe the benefit we see here is due to using more modern instructions & features specific to Shader Model 5 and DirectX 11, plus assets designed for more modern hardware? It's also possible that the early Wii U ports didn't utilize some built-in optimization features designed to overcome memory bandwidth, such as hardware texture compression & eDRAM management, for example. I also wonder if the early SDK was missing some features initially, or if the porting teams were not able to study & utilize the optimization features properly due to time constraints.

EDIT: Looking at the YouTube vid of the 6450, there's no way that's in the U, as the quality of the NFS video we saw wouldn't be even remotely possible at that framerate (the 6450 is running at low detail and is barely playable). Since 30% of the die is unknown and nobody can even say for certain what part it's based on, there's definitely something going on beyond what meets the eye.



timmah said:
I definitely saw a difference in lighting (especially the reflection effect on the road), having played the game on my friend's 360. The reflections in the other versions seem overly shiny to me, while the reflections in the PC/Wii U versions appear more realistic side by side (this may be due to the Wii U/PC using Shader Model 5/DX11). The draw distance appears a bit better to me as well, and there is a noticeable difference in texture resolution (mostly seen in the close-up shots of the road & buildings after a crash in the YouTube vid). I went over and watched a few city races on the 360 for comparison; the surfaces of the buildings appear much flatter & the textures appear a bit blurry until you're right up on them, and the overall look of the game seems a bit fuzzy (maybe the 360 version isn't native 720p?).

Is it possible that one issue is that other ports originated from the 360, while this one originates from the PC version? Maybe the benefit we see here is due to using more modern instructions & features specific to Shader Model 5 and DirectX 11, plus assets designed for more modern hardware? It's also possible that the early Wii U ports didn't utilize some built-in optimization features designed to overcome memory bandwidth, such as hardware texture compression & eDRAM management, for example. I also wonder if the early SDK was missing some features initially, or if the porting teams were not able to study & utilize the optimization features properly due to time constraints.

EDIT: Looking at the YouTube vid of the 6450, there's no way that's in the U, as the quality of the NFS video we saw wouldn't be even remotely possible at that framerate (the 6450 is running at low detail and is barely playable). Since 30% of the die is unknown and nobody can even say for certain what part it's based on, there's definitely something going on beyond what meets the eye.


Lighting-wise, yes, it's probably because of newer instructions like Shader Model 5 and APIs with DX11-like features. However, Shader Model or DirectX version differences wouldn't explain better textures and draw distance. For a GPU to do better textures and geometry (draw distance is included in that) than another, it needs to be more powerful, as the Shader Model or DX version only affects shader operations. For example, a Radeon HD 4890 (DirectX 10.1, Shader Model 4.1) can drive more polygons and run better textures at a higher framerate than an HD 5770, which is a generation ahead of it and supports newer APIs such as Shader Model 5.0 and DX11, even though the effects the 4890 can produce are limited compared to the HD 5770 by its Shader Model and DX version.

Like I said in another post in this thread, ignoring the 30% of the die that's still unknown, the size measurements would mean that the Wii U's GPU is probably something similar to an HD 6450. However, based on this Most Wanted Wii U video, I can't believe that card would be able to produce these kinds of visuals no matter what optimizations they made to the game.
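(To illustrate the "feature level versus raw throughput" point with rough numbers - the HD 4890 and HD 5770 figures below are approximate launch specs quoted from memory, so treat them as illustrative rather than authoritative:)

```python
# Peak shader throughput (2 FLOPs per shader per clock) and memory bandwidth.
def peak_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# HD 4890 (DX10.1 / SM4.1) vs HD 5770 (DX11 / SM5.0): same shader count and core
# clock, so identical peak ALU throughput, and the older card has more bandwidth.
print(peak_gflops(800, 850), bandwidth_gbs(256, 3.9))  # HD 4890: 1360.0 GFLOPS, 124.8 GB/s
print(peak_gflops(800, 850), bandwidth_gbs(128, 4.8))  # HD 5770: 1360.0 GFLOPS,  76.8 GB/s
```

The newer card's advantage is features, not raw grunt, which is exactly the distinction being drawn above.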



dahuman said:

here let me embed for you:

NFS Most Wanted running at MAX settings on an HD 5500 - does this look similar to the above video (minus the annoying blur effect)? Why yes, it does.


No way it's a 160SP 6450 (or similar), which can't even run the game at low settings:

It could also be in the neighborhood of a downscaled HD 6570 and achieve similar results.

Like I've said before, the 160SP theory doesn't match up with the visual output we've seen from the system (IMO).