PhalanxCO said:
demonfox13 said:
Oh yeah, here you go: the Wii, from a technical standpoint, is inferior to the original Xbox (http://wii.ign.com/articles/699/699118p1.html). The ATI Hollywood custom video card is roughly equivalent to an ATI R600, and just like it, it is capable of HDR+AA (http://www.gpgpu.org/forums/viewtopic.php?p=22227), whereas the 360's card is closer to an X1900 XT and the PS3's card is a beefed-up 7800 GTX that can perform closer to an 8800, but not quite.

Given your signature, I find your posts incredibly ironic.  Oh well, you're banned anyway.

I had access to the developer docs for the GameCube last gen, and I know exactly how its GPU worked.  The Xbox's T&L engine was a little more powerful than the GC's, but when it came to texturing, fill rate, and other effects, the GC could eat the Xbox for breakfast, not to mention that the Xbox's architecture was extremely inefficient.  If you had any idea what you were talking about, and at least a modicum of desire to do research, you would realize this.

That's just the GameCube.  The Wii is considerably more powerful than the GC (at least twice, plus the infinite power of duct tape).  The biggest limitation the GC had was that you had to use the existing shader libraries to achieve various effects.  However, this appears to have changed with the Wii's GPU, because High Voltage has mentioned several times that they wrote their own shader libraries.

The Wii is far more powerful than most people give it credit for, because it has a unique architecture that very few developers have ever taken the time to figure out.

Well, I don't think Nintendo really changed the shading pipeline or opened things up so much as High Voltage put in the effort to understand the TEV unit. The TEV unit isn't much different from a texture combiner, which was the common approach to "shaders" on graphics cards of the GeForce 2 era and earlier. Texture combiners were actually more efficient than the programmable shaders that replaced them, and capable of more than most people gave them credit for, but the problem was that they were very hardware-specific: you had to program them differently for each manufacturer, and often differently again when a manufacturer released a new GPU. This meant that, outside of people developing demos for those cards, almost no developers ever bothered to learn how to use texture combiners, and trying to get a developer who is used to programmable pixel shaders to work with a combiner is nearly impossible.
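To make that concrete, here's a rough sketch of what driving the TEV directly looks like. I'm writing it against the homebrew libogc GX API rather than Nintendo's official SDK (which I obviously can't quote here), but the hardware model is the same: instead of writing a shader program, you configure each stage's inputs and combine operation. A single stage set up as a classic modulate (texture color times vertex color) looks something like this:

    #include <gccore.h>

    /* One TEV stage computing output = texture color * rasterized vertex color.
       Each TEV stage evaluates d + lerp(a, b, c); with a = 0, b = TEXC,
       c = RASC, and d = 0 that reduces to TEXC * RASC, i.e. a modulate. */
    void setup_modulate_stage(void)
    {
        GX_SetNumTevStages(1);

        /* Route texcoord set 0 / texture map 0 and vertex color 0 into stage 0. */
        GX_SetTevOrder(GX_TEVSTAGE0, GX_TEXCOORD0, GX_TEXMAP0, GX_COLOR0A0);

        /* Color channel: lerp from zero to the texture color by the raster color. */
        GX_SetTevColorIn(GX_TEVSTAGE0, GX_CC_ZERO, GX_CC_TEXC, GX_CC_RASC, GX_CC_ZERO);
        GX_SetTevColorOp(GX_TEVSTAGE0, GX_TEV_ADD, GX_TB_ZERO, GX_CS_SCALE_1,
                         GX_TRUE, GX_TEVPREV);

        /* Alpha channel: same trick, TEXA * RASA. */
        GX_SetTevAlphaIn(GX_TEVSTAGE0, GX_CA_ZERO, GX_CA_TEXA, GX_CA_RASA, GX_CA_ZERO);
        GX_SetTevAlphaOp(GX_TEVSTAGE0, GX_TEV_ADD, GX_TB_ZERO, GX_CS_SCALE_1,
                         GX_TRUE, GX_TEVPREV);
    }

libogc even has GX_SetTevOp(GX_TEVSTAGE0, GX_MODULATE) as a one-call shortcut for this exact setup. The point is that every effect on this hardware is built by chaining up to 16 of these stages; that's the mental model High Voltage had to learn instead of writing shader code.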
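And for contrast, here's roughly what the same modulate looked like on PC hardware of that era through OpenGL's combine texture environment (standardized in GL 1.3; before that it was EXT/ARB extensions, and on a GeForce 2 specifically you'd more likely have used NV_register_combiners, which has a completely different programming model). That spread of vendor-specific paths is exactly the portability problem I'm describing:

    #include <GL/gl.h>

    /* Same effect (texture * vertex color) via the GL 1.3 combine path.
       Each vendor also shipped its own richer combiner extension
       (NV_register_combiners, ATI_fragment_shader, ...), all incompatible. */
    void setup_modulate_combine(void)
    {
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_ALPHA, GL_PRIMARY_COLOR);
    }

Same visual result, completely different ways of asking for it depending on the silicon. Once HLSL/GLSL-style programmable shaders arrived with one portable model, nobody wanted to go back.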