
Forums - Nintendo Discussion - Wii U's eDRAM stronger than given credit?

curl-6 said:
starworld said:

The PS2 and Xbox were as different as they come, yet the Xbox used brute force to run most ports and run them better. Here is a quote from a developer who developed for all three consoles; different architectures can be a problem if both consoles are close to each other power-wise.

"FWIW I don't ever consider any developer who ships anything lazy.
When you're building a cross platform game, there is always an element of lowest common denominator, it's about costs (and I don't just mean financial).
PS2 was often the "lead SKU" at big publishers because of the installed base, Xbox was a version you had to do, in most cases you could write a simple version of your renderer and just drop the assets on Xbox and they would usually run faster. So you'd increase texture quality and call it done.
Usually when you dropped it on gamecube it would run slower and you'd have no memory left, so you downsample to make things fit, figure out how you could use ARAM without crippling performance and ship it.

If you wrote an Xbox exclusive with no intention of ever shipping on PC, and you actually spent time optimizing, there was a lot of performance to be had. Usually most titles were CPU limited because the polygon indices had to be copied into the GPU ring buffer (which wasn't actually a ring buffer). If your app was pushing a lot of geometry it could literally spend 60% of its time doing nothing but linear memory copies.
It was possible to place jumps into the ringbuffer, to effectively "call" static GPU buffers, but it was tricky to get right because of the pipeline and the fact you had to patch the return address as a jump into the buffer so you'd have to place fences between calls to the same static buffer. 
If you did this however you could trivially saturate the GPU and produce something much better looking.

On GameCube the biggest issue was it just had pathetic triangle throughput; the 10M polygons per second (I don't remember the real number) assumes you never clip or light anything.
GameCube was DX7 class hardware for the most part, albeit a more fully featured version than ever shipped in a PC. The GPU just wasn't very fast.
As I said, its real benefit was the memory architecture, and I still feel it was over-engineered.
On the whole it wasn't a bad machine, but I wouldn't have said it was "more powerful than PS2".
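The push-buffer "call" trick described in the quote can be sketched with a toy byte-counting model; the mesh size and packet layout below are made-up illustrative numbers, not the real Xbox command format.

```python
# Toy model of the trick above: instead of copying every polygon index
# into the ring buffer each frame, write a tiny "jump" packet that calls
# a prebuilt static command buffer. All numbers are illustrative only.

INDICES_PER_MESH = 50_000   # hypothetical mesh size, 16-bit indices
JUMP_PACKET_WORDS = 2       # hypothetical: jump-in word + patched return jump

def naive_submit(num_meshes):
    """Bytes linearly copied when index data is copied inline per frame."""
    return num_meshes * INDICES_PER_MESH * 2    # 2 bytes per index

def call_submit(num_meshes):
    """Bytes written when each mesh is a jump into a static buffer."""
    return num_meshes * JUMP_PACKET_WORDS * 4   # 4 bytes per word

# 100 meshes per frame: 10 MB of copies vs 800 bytes of jump packets.
assert naive_submit(100) == 10_000_000
assert call_submit(100) == 800
```

The fences the dev mentions (needed between two calls to the same static buffer, because the return jump is patched in place) are omitted; the point is only the order-of-magnitude drop in CPU copy work.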

That dev clearly wasn't very competent at GameCube development then, as Factor 5 managed 15m polygons a second at launch, and bettered that with 20m two years later.

GC's shader system could do things PS2 could only dream of, and while it was less flexible than Xbox's shader system, the effects it was hardwired for, like EMBM, it could do very, very efficiently. This is how games like Rogue Squadron 2 and 3, and later on Wii Mario Galaxy 1 & 2 and Jett Rocket, were able to spam certain effects to a level that would choke the Xbox.

Source? There is nothing on GC that would choke the Xbox; this has to be a joke. Double Steal: Second Clash runs at 720p/60fps with amazing graphics. The reason GC was $199 is that it was cheap hardware that produced nice graphics, but the Xbox cost almost twice as much and was significantly more powerful hardware.



starworld said:

Source? There is nothing on GC that would choke the Xbox; this has to be a joke. Double Steal: Second Clash runs at 720p/60fps with amazing graphics. The reason GC was $199 is that it was cheap hardware that produced nice graphics, but the Xbox cost almost twice as much and was significantly more powerful hardware.

GC used a DX7-style fixed function pipeline, where certain effects are hardwired in. The catch is, these effects could be done very, very efficiently because the hardware was designed to do those specific effects very well. Xbox used early DX8 tech, which meant more flexibility, as you could program shaders freely, but at the cost of efficiency, as the system was not as tailored to individual effects.

Things like EMBM and other multitexturing effects could be done on GC, and Wii, with very little performance hit, while duplicating them with Xbox's hardware was slower. 

It's essentially a matter of specialised vs flexible.

As for a source, googling tech articles from over a decade ago is a pain; I tried, but between all the dead links and unfounded speculation it was too much of a chore.
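The specialised-vs-flexible tradeoff above can be sketched in code. The "pipelines" below are a toy software model for illustration, not either console's real API.

```python
# Fixed-function: combine operations come from a small hardwired menu,
# so the hardware can be tuned to run each one at full speed.
FIXED_OPS = {
    "modulate": lambda base, env: base * env,           # EMBM-style modulation
    "add":      lambda base, env: min(1.0, base + env), # additive blend, clamped
}

def fixed_pipeline(base, env, mode):
    # Only the preset modes exist; anything else is simply impossible.
    return FIXED_OPS[mode](base, env)

# Programmable: any per-pixel function is allowed, at the cost of the
# hardware no longer being tailored to specific effects.
def programmable_pipeline(base, env, shader):
    return shader(base, env)

assert fixed_pipeline(0.5, 0.5, "modulate") == 0.25
# A shader can reproduce the same modulate op, or do something new:
assert programmable_pipeline(0.5, 0.5, lambda b, e: b * e) == 0.25
```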



starworld said:

Source? There is nothing on GC that would choke the Xbox; this has to be a joke. Double Steal: Second Clash runs at 720p/60fps with amazing graphics. The reason GC was $199 is that it was cheap hardware that produced nice graphics, but the Xbox cost almost twice as much and was significantly more powerful hardware.

LucasArts canned their Rogue Squadron compilation port for Xbox because they couldn't get it to run. The Xbox wasn't the be-all and end-all of the 6th gen. There were certain things the GC could do that the Xbox could not. The GC's GPU had superior lighting and multitexturing capability. The Xbox's GPU had a higher fillrate and programmable shaders. The problem with the Xbox GPU was that its fillrate advantage started to disappear once all of the GPU's effects were turned on. The ArtX-designed GPU of the GC was optimized to run with all of its effects on.
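A rough sketch of why a raw fillrate lead shrinks once effects are on: a GPU that applies fewer texture layers per pass needs extra passes, dividing its effective fillrate. The per-pass layer counts (GC: 8, Xbox: 4) are the figures quoted later in this thread; the base fillrate value is a placeholder.

```python
import math

def passes_needed(layers, layers_per_pass):
    """Rendering passes needed to apply all texture layers on a surface."""
    return math.ceil(layers / layers_per_pass)

def effective_fillrate(base_fillrate, layers, layers_per_pass):
    """Fillrate after dividing by the number of passes required."""
    return base_fillrate / passes_needed(layers, layers_per_pass)

# An 8-layer surface: 1 pass on hardware doing 8 layers/pass, but
# 2 passes on hardware doing 4 layers/pass, so a 2x raw fillrate
# lead would be cancelled out exactly.
assert passes_needed(8, 8) == 1
assert passes_needed(8, 4) == 2
assert effective_fillrate(4.0, 8, 4) == 2.0
```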



Darc Requiem said:

LucasArts canned their Rogue Squadron compilation port for Xbox because they couldn't get it to run. The Xbox wasn't the be-all and end-all of the 6th gen. There were certain things the GC could do that the Xbox could not. The GC's GPU had superior lighting and multitexturing capability. The Xbox's GPU had a higher fillrate and programmable shaders. The problem with the Xbox GPU was that its fillrate advantage started to disappear once all of the GPU's effects were turned on. The ArtX-designed GPU of the GC was optimized to run with all of its effects on.

No way was GC's lighting superior.



curl-6 said:

GC used a DX7-style fixed function pipeline, where certain effects are hardwired in. The catch is, these effects could be done very, very efficiently because the hardware was designed to do those specific effects very well. Xbox used early DX8 tech, which meant more flexibility, as you could program shaders freely, but at the cost of efficiency, as the system was not as tailored to individual effects.

Things like EMBM and other multitexturing effects could be done on GC, and Wii, with very little performance hit, while duplicating them with Xbox's hardware was slower. 

It's essentially a matter of specialised vs flexible.

As for a source, googling tech articles from over a decade ago is a pain; I tried, but between all the dead links and unfounded speculation it was too much of a chore.

Dude, you say Xbox would choke on GameCube games, yet most people say Resident Evil 4 is the best-looking GC game and the best-looking 6th gen game. Yet the PS2 version looked almost as good, and that game was tailored to GC's strengths; I remember everybody saying it would look like crap on PS2, and it came out looking amazing. Here is a pic from IGN, so we know it's not a source trying to make the game look worse than it is.



starworld said:

Dude, you say Xbox would choke on GameCube games, yet most people say Resident Evil 4 is the best-looking GC game and the best-looking 6th gen game. Yet the PS2 version looked almost as good, and that game was tailored to GC's strengths; I remember everybody saying it would look like crap on PS2, and it came out looking amazing. Here is a pic from IGN, so we know it's not a source trying to make the game look worse than it is.

RE4 on PS2 had lower polygon counts, missing lighting effects, and lower-resolution textures compared to the GC version, and it was not as tailored to GC as the Rogue Squadron games were. On a technical level, Rogue Squadron 2 and 3 > RE4.



curl-6 said:

RE4 on PS2 had lower polygon counts, missing lighting effects, and lower-resolution textures compared to the GC version, and it was not as tailored to GC as the Rogue Squadron games were. On a technical level, Rogue Squadron 2 and 3 > RE4.

Like I said before, this is nonsense; there is no way to be able to tell this, and it is not a fact. Anyway, here is a good comparison video of Resident Evil 4 for PS2 and GameCube. Yes, the GameCube version looks better and the PS2 has fewer branches and trees, but it still looks great; it's something you would see in a 360 vs PS3 comparison. Now, when you compare Splinter Cell: Chaos Theory on GameCube/PS2, you can see it looks like crap.

http://www.youtube.com/watch?v=KkKX-nU9fX4



starworld said:

Source? There is nothing on GC that would choke the Xbox; this has to be a joke. Double Steal: Second Clash runs at 720p/60fps with amazing graphics. The reason GC was $199 is that it was cheap hardware that produced nice graphics, but the Xbox cost almost twice as much and was significantly more powerful hardware.


Well, this guy reviews what you can find elsewhere:

http://www.purevideogames.net/blog/?p=479

 

"

1). The numbers in the specsheets appear higher for Xbox than GameCube, so that must mean it’s better.

2). Microsoft, or [insert magazine or website here] said so.

NOT ONCE have I actually talked to someone believing this propaganda who actually found out the Xbox was more powerful through a proper benchmark test, or by matching up individual components of the machines to see how they fare against each other in their respective operations. Usually I end up talking to some guy that works at EB or something and ask them what they think, and they say the same thing: they heard it from somewhere else, or saw it on a website that knows next to nothing about the tech of these consoles.

Best-looking console game to-date? Damn skippy. 15 million polygons per second, anyone?

So who’s to say what is most powerful?

Personally I’m quite sure Xbox and GameCube are VERY identical in terms of polygon performance and effects, after looking at the facts on each system’s abilities, though I’m led to believe that Xbox might not be as powerful as everyone thinks graphicswise, especially since Microsoft avoided posting REALWORLD PERFORMANCE NUMBERS (the polygon performance you get in an actual game, not a demo test). Nintendo posted a very generous realworld number of 6-12 million polys/sec, which was surpassed in one of its own launch games at 15mps (StarWars Rogue Leader, which is still currently the most polygons displayed in a game to-date).

So, Microsoft states Xbox can push 120+million odd polys/sec with no effects as RAW polygons, and Nintendo eventually posted that GameCube’s theoretical maximum was 90 million polys/sec with effects (1 texture, 1 infinite hardware light). Microsoft’s numbers appear a cool 30 million polys/sec higher than Nintendo’s, but why do current games barely push over 10mps on this “all powerful” Xbox, and 5 games have already matched 15mps on GameCube (originally started by the Rogue Leader launch game)??

For one, Microsoft’s numbers are indeed inflated. The Xbox’s fillrate is nowhere NEAR 4 Gtexels/sec (more like 250-750 Mtexels, according to developers). Xbox’s system bandwidth isn’t a true 6.4GB/sec, considering any info from the CPU to the GPU and vice-versa is bottlenecked at 1.02GB/sec; one-third of GCN’s overall system bandwidth in realtime. Xbox’s GPU also requires 16MB of the 64MB DDR just to cull a Z-buffer (which is embedded on the GCN GPU at no cost to system memory), and also GCN’s internal GPU bandwidth is more than twice that of Xbox’s (25GB/sec compared to 10GB/sec). Also, Xbox claims to have more effects than GameCube, and better texturing ability in its GPU, when the XGPU can only do 4 texture layers per pass, and only 4 infinite hardware lights per pass (8 local lights can be done, also). GCN, on the other hand, boasts 8 texture layers per pass, and 8 infinite hardware lights and local lights per pass, all realtime.
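Taking the quoted numbers at face value, the arithmetic in this paragraph is internally consistent:

```python
# Memory left on Xbox after the claimed 16MB Z-buffer carve-out.
xbox_ram_mb = 64
zbuffer_mb = 16
usable_mb = xbox_ram_mb - zbuffer_mb
assert usable_mb == 48

# "1.02GB/sec; one-third of GCN's overall system bandwidth"
# implies roughly 3.06 GB/s total for GCN.
gcn_system_bw = 1.02 * 3
assert abs(gcn_system_bw - 3.06) < 1e-9

# Internal GPU bandwidth: 25 GB/s vs 10 GB/s is a 2.5x ratio,
# so "more than twice" checks out.
assert 25 / 10 == 2.5
```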

What this means is that while Xbox relies on vertex shaders and pixel shaders (which BTW are absent from GCN hardware) to do realtime bumpmapping, the same effect is done in hardware on GameCube via its texture layers. Xbox also must deal with texture layers per bumpmapped surface per scene, though.

Also this whole processor thing is quite twisted considering Xbox and GameCube are two TOTALLY DIFFERENT architectures (32/64-bit hybrid, PowerPC native compared to 32-bit Wintel). GameCube, having this architecture, has a significantly shorter data pipeline than Xbox's PIII setup (4-7 stages versus up to 14), meaning it can process information more than twice as fast per clock cycle. In fact, this GCN CPU (a PowerPC 750e IBM chip) is often said to be as fast as a 700mhz machine at 400mhz. So GCN could be 849mhz compared to Xbox's 733mhz machine performancewise.
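Worked through: the quoted 849mhz figure follows from scaling the GameCube CPU's actual 485MHz clock (not stated in the quote, but implied by the result) by the claimed 700-to-400 per-clock advantage.

```python
scale = 700 / 400        # claimed per-clock advantage of the short pipeline
gcn_clock_mhz = 485      # Gekko's real clock speed (assumed here)
equivalent = gcn_clock_mhz * scale

assert round(equivalent) == 849   # matches the quoted figure
assert equivalent > 733           # i.e. above Xbox's 733MHz clock
```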

Not ONCE do you hear this fact stated by Microsoft’s PR, nor do you see anything listed that Xbox can be “beat in” on their official specs (no realworld poly count, no realworld fillrate, no listing of simultaneous texture layers/hardware lights per pass, no mentioning that pixel/vertex shaders only do bumpmapping and skinning commonly done on all games now)…

 

One of GameCube's best water displays with water refraction/reflection maps

 

One of Xbox's best water displays without water refraction/reflection maps

 

Now, don’t get me wrong; I love my Xbox, but there’s no way we’re EVER going to see more than 30 million poly/sec games in this console’s lifespan, and neither will GameCube. Dead or Alive 3, a game Tecmo said “was impossible on any system other than Xbox” due to the amount of polygons onscreen, is a 9-10mps game, tops. The character models (which were also claimed to be an impossibility elsewhere) consisted of 9,000 polygons each- the same amount of polygons in characters in StarFox Adventures, Eternal Darkness, and even in Luigi’s Mansion (end boss). Resident Evil 0, however, boasts the highest polygonal “low-end” model to-date- a whopping 25,000 poly character. Now why is this possible (even against prerendered backgrounds) on a “less technical” console? Why isn’t Xbox smothering GCN to death with games that are impossible to be done on any other console?

"
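The per-frame arithmetic behind the figures in the quote, assuming a 60fps target (Rogue Leader runs at 60fps):

```python
polys_per_sec = 15_000_000   # Rogue Leader figure quoted above
fps = 60
per_frame = polys_per_sec // fps
assert per_frame == 250_000  # polygon budget per 60fps frame

# A 9,000-polygon character model (the DOA3 / StarFox Adventures figure):
chars_per_frame = per_frame // 9_000
assert chars_per_frame == 27  # rough headcount before anything else draws
```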



starworld said:

Dude, you say Xbox would choke on GameCube games, yet most people say Resident Evil 4 is the best-looking GC game and the best-looking 6th gen game. Yet the PS2 version looked almost as good, and that game was tailored to GC's strengths; I remember everybody saying it would look like crap on PS2, and it came out looking amazing. Here is a pic from IGN, so we know it's not a source trying to make the game look worse than it is.

 

 

lol, GameCube and Wii Resident Evil 4 were identical; the PS2 shot you posted seems to be from the GameCube version, you only changed the logo.

here

http://www.gameswelike.com/web/re/RE%20Comp.htm

 

"

The PS2 version of RE4 is a fair imitation of the GC one, and we compliment it on the fact that it runs just as smoothly as the original.  But it does make a lot of graphical compromises.  The biggest one is the loss of most of the lighting.  The PS2 version sacrifices the ambient world lighting, and also much of the dynamic lighting.  For example, take these 3 shots below.  On the left, Leon is first lit by the muzzle flash from his shotgun, and then by the barrel explosion.  On the right, there is no light from the muzzle flash on Leon's face, and the explosion is somewhat more subdued.


 

There's also a lot of fairly subtle geometry loss in the PS2 game.  For example, trees have fewer branches.  Take a look at these comparison shots:

 

 

Character models suffer from lower polygon counts and texture detail, as well.  Take these dogs, for instance.  These shots are also another example of the dull uniform lighting on the PS2.

 

 

Here's another example of compromise, this time in a video clip.  In this room, the PS2 version is missing the lighting from the torch behind Leon, and the water on the floor as well.  Click the picture for video; it switches versions halfway through.

 

One more video, this one shows another difference in dynamic lighting--lightning strikes!  The lightning on Gamecube looks incredible; illuminating the whole scene, and casting shadows.  On PS2, it's easy to not notice the lightning at all.  Below are before/after pictures, click for video.

 


 

"



tanok said:

Dude, I posted an HD comparison video that shows the difference; watch it. The PS2 version still looks great, though.

http://www.youtube.com/watch?v=KkKX-nU9fX4