
Shin'en: If you can't make great looking games on Wii U, it's not the hardware's fault

fatslob-:O said:

@Bold You realize that GCN can stand for two things: GameCube Nintendo or Graphics Core Next, which is the newest GPU architecture from AMD.

"you're still making it sound like Wii U's GPGPU capabilities aren't too useful, why?"

You already know the reason, say it with me: VLIW for GPU compute is inefficient :P (BTW VLIW is practically bad in almost all cases for compute.)

"Anyway, even though floating point is supposed to be Espresso's weak point, if used correctly it's not so bad at all. I recall that a NeoGAF member tested several CPUs in floating point / SIMD, and a PPC750CL (Broadway) didn't do too badly at all compared to other CPUs. (If you haven't seen it before, I'll link you to the post, if I can find it.) And if Espresso is 3 Broadways with higher clocks and more cache, then it's not too hard to estimate how capable Espresso should be."

Whoa there bro, I wasn't talking about the IBM Espresso in my last post. It's fine for what it does, but don't just hope for mind-blowing in-game physics.

We're talking about AMD GPU architecture, right? How the hell did you get GameCube out of that? I'm talking about Graphics Core Next. If I ever refer to the GameCube when speaking of AMD architecture (no idea why I would do that), I'll use the actual term "GameCube".

*sigh* Dude, I've known that VLIW4/5 are inefficient compared to GCN since the announcement of GCN, but you're still making it seem like it might as well be ignored. If it's documented in the Wii U SDK, and if it works how they want, then they'll use it. If it's too much work to even bother, then they won't.

Well, I did say that the CPU's floating point SIMD wasn't its strongest point, and that GPGPU could allow some leeway, and then you said that we shouldn't expect anything amazing from the CPU because GPGPU won't help with some things that are best done on the CPU (which is what I agreed with). That post in particular was a response to your comment about the CPU not being able to do much with floating point/SIMD type code. Compared to what we will see from the PS4/X1 CPUs in a core-for-core comparison (since they're the Wii U's competition), it's not far off.



forethought14 said:
fatslob-:O said:

@Bold You realize that GCN can stand for two things: GameCube Nintendo or Graphics Core Next, which is the newest GPU architecture from AMD.

"you're still making it sound like Wii U's GPGPU capabilities aren't too useful, why?"

You already know the reason, say it with me: VLIW for GPU compute is inefficient :P (BTW VLIW is practically bad in almost all cases for compute.)

"Anyway, even though floating point is supposed to be Espresso's weak point, if used correctly it's not so bad at all. I recall that a NeoGAF member tested several CPUs in floating point / SIMD, and a PPC750CL (Broadway) didn't do too badly at all compared to other CPUs. (If you haven't seen it before, I'll link you to the post, if I can find it.) And if Espresso is 3 Broadways with higher clocks and more cache, then it's not too hard to estimate how capable Espresso should be."

Whoa there bro, I wasn't talking about the IBM Espresso in my last post. It's fine for what it does, but don't just hope for mind-blowing in-game physics.

We're talking about AMD GPU architecture, right? How the hell did you get GameCube out of that? I'm talking about Graphics Core Next. If I ever refer to the GameCube when speaking of AMD architecture (no idea why I would do that), I'll use the actual term "GameCube".

*sigh* Dude, I've known that VLIW4/5 are inefficient compared to GCN since the announcement of GCN, but you're still making it seem like it might as well be ignored. If it's documented in the Wii U SDK, and if it works how they want, then they'll use it. If it's too much work to even bother, then they won't.

Well, I did say that the CPU's floating point SIMD wasn't its strongest point, and that GPGPU could allow some leeway, and then you said that we shouldn't expect anything amazing from the CPU because GPGPU won't help with some things that are best done on the CPU (which is what I agreed with). That post in particular was a response to your comment about the CPU not being able to do much with floating point/SIMD type code. Compared to what we will see from the PS4/X1 CPUs in a core-for-core comparison (since they're the Wii U's competition), it's not far off.

 
@Bold I got it from this: "And why are you comparing Wii U to GCN at all?"

Next time refer to the Wii U as VLIW.

I don't know about you, bro, but a 64-bit SIMD engine is pathetic. The PS2 had a 128-bit SIMD engine, so why not the Wii U?

BTW the PS4 and X1 probably feature a 256-bit SIMD engine.

Now that I look at the CPU more and more, it's just a modified IBM Broadway or Gekko, only this time it has 3 cores with higher clock speeds and maybe some new instruction sets. That same IBM Broadway processor also features the same 64-bit SIMD engine as the IBM Gekko. Shame on Nintendo for not moving on to a more advanced CPU architecture; instead they keep it practically the same as the GameCube to get the backwards compatibility.
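As a rough illustration of why SIMD width matters for peak FLOPS - taking the widths mentioned above and assuming one multiply plus one add per 32-bit lane per cycle, which is an idealized peak for illustration, not a measured figure:

```python
# Rough sketch: convert a SIMD width into 32-bit lanes and an idealized
# peak FLOPs-per-cycle figure (assuming 1 multiply + 1 add per lane per
# cycle). Illustrative only; real throughput depends on the actual units.

def peak_flops_per_cycle(simd_bits, ops_per_lane=2):
    lanes = simd_bits // 32            # single-precision lanes
    return lanes * ops_per_lane

for name, bits in [("Gekko/Broadway paired singles", 64),
                   ("PS2 VU-style 128-bit SIMD", 128),
                   ("Assumed 256-bit SIMD (PS4/X1 claim above)", 256)]:
    print(f"{name}: {bits // 32} lanes, ~{peak_flops_per_cycle(bits)} FLOPs/cycle peak")
```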



fatslob-:O said:

 
@Bold I got it from this: "And why are you comparing Wii U to GCN at all?"

Next time refer to the Wii U as VLIW.

I don't know about you, bro, but a 64-bit SIMD engine is pathetic. The PS2 had a 128-bit SIMD engine, so why not the Wii U?

BTW the PS4 and X1 probably feature a 256-bit SIMD engine.

Now that I look at the CPU more and more, it's just a modified IBM Broadway or Gekko, only this time it has 3 cores with higher clock speeds and maybe some new instruction sets. That same IBM Broadway processor also features the same 64-bit SIMD engine as the IBM Gekko. Shame on Nintendo for not moving on to a more advanced CPU architecture; instead they keep it practically the same as the GameCube to get the backwards compatibility.

We were talking about GPGPU, and then we strayed to how VLIW is inefficient compared to GCN. I then asked why you were comparing the Wii U (VLIW) to GCN when they're not the same architecture. I'm still confused about how you brought "GameCube" into this. I merely stated that GPGPU will play an important role in all 3 consoles, with GCN being easier to use than VLIW. If you thought I was talking about the GameCube, well, your reading comprehension skills failed you at that line.

http://www.neogaf.com/forum/showpost.php?p=50767125&postcount=3756

Pathetic? Well, maybe compared to an i5 or a multi-hundred-dollar CPU, but for a 32-bit x 2 paired-singles CPU (which is technically a "form" of SIMD) clocked at 1.24GHz (if we use Broadway as a base and just increase the clocks), it's actually very competitive with a more modern design like Bobcat, which is the predecessor to the Jaguar (2 of the 8 cores are reserved) that we'll be seeing in the PS4/X1. Core for core, taking advantage of each other's capabilities (and based on these tests), Espresso and Jaguar should not be too far apart.
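For anyone unfamiliar with paired singles, here is a toy sketch of the idea (not actual PowerPC code): each 64-bit FP register holds two 32-bit floats, and a single instruction such as ps_madd does a multiply-add on both halves at once, so one issue slot yields four FLOPs.

```python
# Toy model of Gekko/Broadway paired singles: a "register" is a pair of
# 32-bit floats, and ps_madd computes (a*c + b) on both halves in one go.
# This illustrates the idea; it is not a cycle-accurate emulation.

PairedSingle = tuple  # (ps0, ps1), each conceptually a 32-bit float

def ps_madd(a: PairedSingle, c: PairedSingle, b: PairedSingle) -> PairedSingle:
    # One instruction, two fused multiply-adds -> 4 FLOPs per issue
    return (a[0] * c[0] + b[0],
            a[1] * c[1] + b[1])

pos      = (1.0, 2.0)
velocity = (0.5, -0.25)
dt       = (1.0 / 60.0, 1.0 / 60.0)

print(ps_madd(velocity, dt, pos))   # pos + velocity * dt for both lanes at once
```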



forethought14 said:
fatslob-:O said:

 
@Bold I got it from this: "And why are you comparing Wii U to GCN at all?"

Next time refer to the Wii U as VLIW.

I don't know about you, bro, but a 64-bit SIMD engine is pathetic. The PS2 had a 128-bit SIMD engine, so why not the Wii U?

BTW the PS4 and X1 probably feature a 256-bit SIMD engine.

Now that I look at the CPU more and more, it's just a modified IBM Broadway or Gekko, only this time it has 3 cores with higher clock speeds and maybe some new instruction sets. That same IBM Broadway processor also features the same 64-bit SIMD engine as the IBM Gekko. Shame on Nintendo for not moving on to a more advanced CPU architecture; instead they keep it practically the same as the GameCube to get the backwards compatibility.

We were talking about GPGPU, and then we strayed to how VLIW is inefficient compared to GCN. I then asked why you were comparing the Wii U (VLIW) to GCN when they're not the same architecture. I'm still confused about how you brought "GameCube" into this. I merely stated that GPGPU will play an important role in all 3 consoles, with GCN being easier to use than VLIW. If you thought I was talking about the GameCube, well, your reading comprehension skills failed you at that line.

http://www.neogaf.com/forum/showpost.php?p=50767125&postcount=3756

Pathetic? Well, maybe compared to an i5 or a multi-hundred-dollar CPU, but for a 32-bit x 2 paired-singles CPU (which is technically a "form" of SIMD) clocked at 1.24GHz (if we use Broadway as a base and just increase the clocks), it's actually very competitive with a more modern design like Bobcat, which is the predecessor to the Jaguar (2 of the 8 cores are reserved) that we'll be seeing in the PS4/X1. Core for core, taking advantage of each other's capabilities (and based on these tests), Espresso and Jaguar should not be too far apart.

You don't refer to VLIW as "Wii U"; use proper codenames next time to avoid confusion.

As for your question of why I compare it to GCN: GPU architectures are getting more modern by the day, and it's not the Wii U's fault, but it's going up against next-gen consoles so it's only fair. If I wanted to, I could have compared it to Fermi because they're pretty similar, and Kepler isn't too far off even though it lost its hardware scheduler.

If we take the fact that the IBM Broadway can pull off 2.9 GFLOPS and scale it up to the IBM Espresso, it can only muster about 15 GFLOPS.

BTW, what is featured in the PS4/X1 will be able to pull off around 100 GFLOPS.
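Roughly how those two figures work out, assuming straight clock-and-core scaling from Broadway's ~2.9 GFLOPS and, for Jaguar, 8 cores at 1.6GHz doing 8 single-precision FLOPs per cycle (back-of-the-envelope assumptions, not official specs):

```python
# Back-of-the-envelope peak-FLOPS estimates behind the figures above.
# Assumptions (not official specs): Broadway ~2.9 GFLOPS at 729 MHz,
# Espresso = 3 such cores at ~1.24 GHz, Jaguar = 8 cores at 1.6 GHz
# doing 8 single-precision FLOPs per cycle.

broadway_gflops = 2.9
broadway_mhz = 729.0

espresso_mhz = 1243.0
espresso_cores = 3
espresso_gflops = broadway_gflops * (espresso_mhz / broadway_mhz) * espresso_cores
print(f"Espresso ~= {espresso_gflops:.1f} GFLOPS")        # ~14.8, i.e. the "15 GFLOPS"

jaguar_cores = 8
jaguar_ghz = 1.6
jaguar_flops_per_cycle = 8                                 # assumed per core
jaguar_gflops = jaguar_cores * jaguar_ghz * jaguar_flops_per_cycle
print(f"PS4/X1 Jaguar ~= {jaguar_gflops:.1f} GFLOPS")      # ~102, i.e. the "around 100"
```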

You overestimated its capabilities.

Edit: Holy crap, I just realized something: the PS4/X1's CPU is 30% as strong as the Wii U's GPU.

Edit 2: How does that sh#t happen? Like, seriously, I thought a GPU was supposed to be wayyyyy stronger than a CPU, especially since it's VLIW, which is like a godsend for GFLOPS/watt and price. I think Nintendo got ripped off by AMD LOLOL.

Edit 3: This thing is being destroyed by those new Haswell CPUs, which feature up to 400 GFLOPS of performance from the CPU alone, and that's not even counting the iGPU.

Edit 4: Dude, this thing is weaker than a PS3 (FLOPS-wise, of course), which can pull off 400 GFLOPS combined compared to the Wii U's 365 GFLOPS combined.

Edit 5: Now I seriously know why 4A Games didn't bother with a port of Metro: Last Light, seeing as how this is inadequate for any sort of rigid-body physics in that game.

BTW I don't hate the Wii U; I expect some good games on it too.
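For anyone checking the arithmetic in those edits, here is the rough math, using the thread's figures plus one outside assumption: the commonly cited ~352 GFLOPS estimate for Latte (320 ALUs x 550MHz x 2 ops per cycle):

```python
# Rough math behind Edits 1 and 4, using thread figures plus one outside
# assumption: Latte at ~352 GFLOPS (320 ALUs x 550 MHz x 2 ops/cycle).

latte_gflops = 320 * 0.550 * 2        # ~352 (assumed estimate, not confirmed)
espresso_gflops = 15.0                # from the scaling above
ps4_x1_cpu_gflops = 100.0             # figure used in this thread

print(f"PS4/X1 CPU vs Wii U GPU: {ps4_x1_cpu_gflops / latte_gflops:.0%}")   # ~28%, the "30%"
print(f"Wii U combined: ~{latte_gflops + espresso_gflops:.0f} GFLOPS")      # ~367, the "365 combined"
```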



fatslob-:O said:

You don't refer to VLIW as "Wii U"; use proper codenames next time to avoid confusion.

As for your question of why I compare it to GCN: GPU architectures are getting more modern by the day, and it's not the Wii U's fault, but it's going up against next-gen consoles so it's only fair. If I wanted to, I could have compared it to Fermi because they're pretty similar, and Kepler isn't too far off even though it lost its hardware scheduler.

If we take the fact that the IBM Broadway can pull off 2.9 GFLOPS and scale it up to the IBM Espresso, it can only muster about 15 GFLOPS.

BTW, what is featured in the PS4/X1 will be able to pull off around 100 GFLOPS.

You overestimated its capabilities.

Edit: Holy crap, I just realized something: the PS4/X1's CPU is 30% as strong as the Wii U's GPU.

Dude, what the hell is wrong with you? The Wii U's Latte likely uses a VLIW architecture, which is why I'm referring to it as "VLIW". What part of that don't you understand?

You're basing this on GFLOPS? Lol, in that case, next-generation CPUs are less powerful than Cell and only on par with Xenon! Cell does over 200 GFLOPS... Xenon does about 115 GFLOPS (can't remember the exact number). One thing's for sure: if Espresso were clock for clock with Xenon and Cell, and at 15 GFLOPS, there's ABSOLUTELY no way it would run ANY games ported from those two consoles. But you're forgetting that it's not the same architecture, nor the same instruction set, nor is it, plainly and simply, similar. Completely different architectures. By the way, is the "over 100 GFLOPS" including 8 cores or 6? Well, let me put this in terms of the per-core matmul SIMD tests (I'll use blu's tests as a base; these may be off a bit since I'm using those numbers as a baseline and actual performance may vary, but not significantly):

PowerPC 750CL Broadway @ 729MHz = 4446.666

PowerPC 750CL Espresso @ 1.24GHz = 7581.902

AMD Bobcat @ 1.3GHz = 5619.728

AMD Jaguar @ 1.6-2.0GHz (let's add an extra 15% performance increase, because that's what AMD's documents say) = 7757.164 - 9696.455 (not exactly sure, considering it may be slightly modified in the PS4/X1, so somewhere in between)

Well, if a measly Espresso can be this competitive with Jaguar....then Lol....
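The Espresso and Jaguar rows above are just the measured Broadway and Bobcat scores scaled by clock ratio (plus the stated ~15% IPC bump for Jaguar over Bobcat). A sketch of that scaling follows; it lands in the same ballpark as the listed numbers, with small differences down to the exact clock and uplift assumed:

```python
# Scaling sketch for the per-core matmul figures above: take blu's measured
# Broadway and Bobcat scores and scale by clock ratio (plus an assumed ~15%
# IPC gain for Jaguar over Bobcat). Exact outputs depend on the clocks and
# uplift assumed; the listed 7757/9696 figures work out to a slightly
# smaller uplift (~12%).

broadway_score, broadway_mhz = 4446.666, 729.0
bobcat_score, bobcat_ghz = 5619.728, 1.3

espresso  = broadway_score * (1243.0 / broadway_mhz)      # ~7582 vs 7581.902 listed
jaguar_lo = bobcat_score * (1.6 / bobcat_ghz) * 1.15      # ~7954
jaguar_hi = bobcat_score * (2.0 / bobcat_ghz) * 1.15      # ~9943

print(f"Espresso @ 1.24GHz ~= {espresso:.0f}")
print(f"Jaguar @ 1.6-2.0GHz ~= {jaguar_lo:.0f} - {jaguar_hi:.0f}")
```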



forethought14 said:
fatslob-:O said:

You don't refer to VLIW as "Wii U"; use proper codenames next time to avoid confusion.

As for your question of why I compare it to GCN: GPU architectures are getting more modern by the day, and it's not the Wii U's fault, but it's going up against next-gen consoles so it's only fair. If I wanted to, I could have compared it to Fermi because they're pretty similar, and Kepler isn't too far off even though it lost its hardware scheduler.

If we take the fact that the IBM Broadway can pull off 2.9 GFLOPS and scale it up to the IBM Espresso, it can only muster about 15 GFLOPS.

BTW, what is featured in the PS4/X1 will be able to pull off around 100 GFLOPS.

You overestimated its capabilities.

Edit: Holy crap, I just realized something: the PS4/X1's CPU is 30% as strong as the Wii U's GPU.

Dude, what the hell is wrong with you? The Wii U's Latte likely uses a VLIW architecture, which is why I'm referring to it as "VLIW". What part of that don't you understand?

You're basing this on GFLOPS? Lol, in that case, next-generation CPUs are less powerful than Cell and only on par with Xenon! Cell does over 200 GFLOPS... Xenon does about 115 GFLOPS (can't remember the exact number). One thing's for sure: if Espresso were clock for clock with Xenon and Cell, and at 15 GFLOPS, there's ABSOLUTELY no way it would run ANY games ported from those two consoles. But you're forgetting that it's not the same architecture, nor the same instruction set, nor is it, plainly and simply, similar. Completely different architectures. By the way, is the "over 100 GFLOPS" including 8 cores or 6? Well, let me put this in terms of the per-core matmul SIMD tests (I'll use blu's tests as a base; these may be off a bit since I'm using those numbers as a baseline and actual performance may vary, but not significantly):

PowerPC 750CL Broadway @ 729MHz = 4446.666

PowerPC 750CL Espresso @ 1.24GHz = 7581.902

AMD Bobcat @ 1.3GHz = 5619.728

AMD Jaguar @ 1.6-2.0GHz (let's add an extra 15% performance increase, because that's what AMD's documents say) = 7757.164 - 9696.455 (not exactly sure, considering it may be slightly modified in the PS4/X1, so somewhere in between)

Well, if a measly Espresso can be this competitive with Jaguar....then Lol....

Xenon does 70 GFLOPS.

http://ca.ign.com/articles/2010/08/26/xbox-360-vs-playstation-3-the-hardware-throwdown

BTW Bobcat is totally different from Jaguar. (The PS4/X1 can pull off 100 GFLOPS like I said earlier.)

http://www.reddit.com/r/PS4/comments/19d9yc/how_powerful_is_the_amd_jaguar_cpu_for_the_ps4/

FLOPS depend on SIMD engines, and you know this.

Read my other edits in my last post.

Edit: The architecture would have to be the same in order for the Wii U to play Wii games, plus they can't change anything drastic, otherwise it's bye-bye to backwards compatibility.



Okay, a few important things I need to add here. Firstly, Espresso isn't 3 Broadways duct-taped together with a raised clock (although if it were, that wouldn't be a bad thing at all). We know almost as little about Espresso as we do about Latte. Secondly, Espresso surprised the Bink developers by running Bink 2 - Espresso is no slouch.

And lastly, but by no means least, you would be wrong not to expect mind-blowing physics from Espresso. The link below demonstrates how well Broadway could handle physics if a developer put their mind to it:

http://www.youtube.com/watch?v=41w-bbtVFKE



Watch Dogs & Call Of Duty: Ghosts "Wii U" will look like the other versions.



snowdog said:
Okay, a few important things I need to add here. Firstly, Espresso isn't 3 Broadways duct-taped together with a raised clock (although if it were, that wouldn't be a bad thing at all). We know almost as little about Espresso as we do about Latte. Secondly, Espresso surprised the Bink developers by running Bink 2 - Espresso is no slouch.

And lastly, but by no means least, you would be wrong not to expect mind-blowing physics from Espresso. The link below demonstrates how well Broadway could handle physics if a developer put their mind to it:

http://www.youtube.com/watch?v=41w-bbtVFKE

How about a nicer framerate?



fatslob-:O said:
snowdog said:
Okay, a few important things I need to add here. Firstly, Espresso isn't 3 Broadways duct-taped together with a raised clock (although if it were, that wouldn't be a bad thing at all). We know almost as little about Espresso as we do about Latte. Secondly, Espresso surprised the Bink developers by running Bink 2 - Espresso is no slouch.

And lastly, but by no means least, you would be wrong not to expect mind-blowing physics from Espresso. The link below demonstrates how well Broadway could handle physics if a developer put their mind to it:

http://www.youtube.com/watch?v=41w-bbtVFKE

How about a nicer framerate?



Lol... really? Over 400 objects and you expect 30fps? The framerate is fine in the second part, with zero gravity, when he turns the hoover thingy on. For a console that's closer to a 6th-gen machine than a 7th-gen machine in terms of power, that physics demonstration is impressive.