
Original XBOX performance

OlfinBedwere said:

It makes sense to use FLOPS when comparing modern-day consoles, since they're all based on the same underlying GPU architecture (apart from the Switch, and even that's similar enough to the others to be at least a useful ballpark figure)

No it doesn't. People need to stop believing this.

AMD, for example, has consistently iterated upon its Graphics Core Next design...
A 4-Teraflop Graphics Core Next 1.0 GPU will lose to a 4-Teraflop Graphics Core Next 5.0 GPU. - I can even demonstrate this if you want.

Here we have the Radeon 7970 (4.0 - 4.3 Teraflop) against the Radeon 280 (2.96 - 3.34 Teraflop).
The Radeon 7970 should be able to wipe the floor with its almost 1-Teraflop advantage, right? Wrong.
https://www.anandtech.com/bench/product/1722?vs=1751

They are both Graphics Core Next.
Again, FLOPS is irrelevant.

FLOPS is a theoretical number, not a real-world one. The GPUs in the Playstation 4 Pro and Xbox One X can do more work per FLOP than those in the base Xbox One and Playstation 4 consoles - that's a fact, due to efficiency tweaks in other areas.
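To put rough numbers on that: a "paper" FLOPS figure is just shader count x 2 ops (one fused multiply-add per cycle) x clock - a ceiling, not a measure of work actually done. A minimal Python sketch; the efficiency factors are made-up values purely to illustrate the point, not measured data:

    # Peak FLOPS = shaders x 2 (one fused multiply-add per cycle) x clock.
    def peak_gflops(shaders: int, clock_mhz: float) -> float:
        return shaders * 2 * clock_mhz / 1000.0

    print(peak_gflops(2048, 925))  # Radeon HD 7970: ~3789 GFLOPS on paper

    # A newer architecture with a lower paper number can still come out ahead
    # once you weight by how much of that peak a real game actually extracts.
    # (Efficiency values here are hypothetical, for illustration only.)
    for name, gflops, eff in [("older GCN", 3789, 0.60), ("newer GCN", 3290, 0.75)]:
        print(f"{name}: {gflops * eff:.0f} effective GFLOPS")

Same formula, same units, yet the part with the lower paper figure comes out ahead - which is exactly what the benchmark link above shows.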

OlfinBedwere said:

I think that had as much to do with the fact that probably 90% of games from that generation were designed with the PS2 in mind, and just given bumps to resolution, texture filtering quality and anti-aliasing when they were brought over to the Xbox and Gamecube. Microsoft and Nintendo had a better understanding of what their console's strengths were, and so designed their first-party titles to take advantage of them.

Ports from the PC to the Xbox did shine on the Xbox, though.

Azzanation said:

In comparison, the GC's CPU blew the doors off the Pentium 3 processor. The P3 was only a 32bit CPU compared to the GC's 128bit processor; plus, the IBM-made Dolphin CPU was one of the world's smallest designs, meaning its pipelines were superior. The P3 was a decent CPU mainly due to its high GHz.

Nah. - You are very wrong.
For one, Gekko is a 32bit, not a 128bit, processor.
https://en.wikipedia.org/wiki/Gekko_(microprocessor)

Bits do not correspond to performance either... Most games wouldn't have leveraged 64bit registers anyway, as doing so would consume more RAM.
Bits are not a representation of performance.

The Celeron 733Mhz again beats the 500Mhz PowerPC equivalent of the Gamecube's CPU... And that ignores the fact that the Gamecube's CPU is clocked lower and the Xbox's CPU has performance enhancements over that Celeron.
The P6-derived core of Intel's chips typically had a pretty decent, industry-leading edge for the most part.

Even Anandtech recognised that the Intel chip would be superior. https://www.anandtech.com/show/858/2


Azzanation said:

The GPUs I am not too sure about; the GC's GPU could render more effects per polygon, so I wouldn't be surprised if the GC's GPU was technically better too.

There are many aspects where the Gamecube's GPU is better than the Xbox's GPU.
But there are many aspects where the Xbox's GPU is better than the Gamecube's.

However... The strengths of the Xbox GPU tended to outweigh its limitations, hence why its games were typically a step up overall.

Azzanation said:

Xbox had superior ports, that's a given due to its X64 architecture, same as PCs at the time, which meant porting was simple.

More false information.
The Xbox CPU is not x64; 64bit extensions weren't tacked on until Intel adopted AMD's x86-64 extensions (AKA EM64T) with the Prescott variant of the Pentium 4.
https://en.wikipedia.org/wiki/X86-64
https://en.wikipedia.org/wiki/Pentium_4#Prescott

The Xbox CPU is x86, as it's Pentium 3-derived.
https://en.wikipedia.org/wiki/Pentium_III
https://en.wikipedia.org/wiki/Xbox_technical_specifications

Azzanation said:

However, games built from the ground up with the GC in mind struggled to run on the Xbox. There was an old article from Factor 5 saying the Xbox's hardware could not render Rogue Leader at a comfortable frame rate (below 30 frames) compared to the silky-smooth 60-frame GC version. Unfortunately I cannot find that article anymore, so I guess that's just my word at this stage.

Because Rogue Leader leveraged the Gamecube's GPU strengths rather than the Xbox's.
If you were to make a shader-heavy game that gobbled up RAM like there's no tomorrow... the Gamecube would also struggle.

HoloDust said:

Yeah, I know about that one too... or a variation of it... yet I have no idea how anyone came up with the GC and XBOX numbers.

GC has a 4:1:4:4 core config @ 162MHz, while XBOX has 4:2:8:4 @ 233MHz (pixel:vertex:TMU:ROP)... how one comes to actual FLOPS is beyond me without knowing the particular architectures.

For example, I still can't figure out how to convert the PS3's RSX GPU specs into FLOPS (the 24:8:24:8 part), since, to me at least, something seems to be off with the quoted numbers, as if they conflict with each other. For example, the current GFLOPS figure on the wiki is 192 for pixel shaders (I remember this being changed numerous times), and this is quoted from the K1 whitepaper, which states 192 GFLOPS for the whole of PS3's GPU.

  • 24 parallel pixel-shader ALU pipelines clocked at 550 MHz
    • 5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 texture ALU)
    • 27 floating-point operations per pipeline, per cycle
    • Floating Point Operations per second: 192 GFLOPs
  • 8 parallel vertex pipelines
    • 2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
    • 10 FLOPS per pipeline, per cycle

They could be including vertex performance in that calculation.
Citing nVidia is a pretty dubious affair, because nVidia will want to fluff up the numbers as much as possible.

Last edited by Pemalite - on 16 September 2018

--::{PC Gaming Master Race}::--


To correct earlier assumptions, the "Gekko" CPU that powers the Gamecube is 32-Bit. The "Flipper" GPU is 64-Bit.



Pemalite said:
HoloDust said:

Yeah, I know about that one too... or a variation of it... yet I have no idea how anyone came up with the GC and XBOX numbers.

GC has a 4:1:4:4 core config @ 162MHz, while XBOX has 4:2:8:4 @ 233MHz (pixel:vertex:TMU:ROP)... how one comes to actual FLOPS is beyond me without knowing the particular architectures.

For example, I still can't figure out how to convert the PS3's RSX GPU specs into FLOPS (the 24:8:24:8 part), since, to me at least, something seems to be off with the quoted numbers, as if they conflict with each other. For example, the current GFLOPS figure on the wiki is 192 for pixel shaders (I remember this being changed numerous times), and this is quoted from the K1 whitepaper, which states 192 GFLOPS for the whole of PS3's GPU.

  • 24 parallel pixel-shader ALU pipelines clocked at 550 MHz
    • 5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 texture ALU)
    • 27 floating-point operations per pipeline, per cycle
    • Floating Point Operations per second: 192 GFLOPs
  • 8 parallel vertex pipelines
    • 2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
    • 10 FLOPS per pipeline, per cycle

They could be including vertex performance in that calculation.
Citing nVidia is a pretty dubious affair, because nVidia will want to fluff up the numbers as much as possible.

Yeah - I remember around that time people were using some wild numbers, 400 or so GFLOPS. Which, when you get down to the math, really comes to something as silly as this:

(24 x 27 FLOPS + 8 x 10 FLOPS) x 550MHz ≈ 400 GFLOPS

And those 27-per-cycle and 10-per-cycle numbers are indeed from an official nVidia document - or so it seems...   https://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-7800-GTX-GPU-Review/Hardware-Details

Yet again, the 192 GFLOPS for the whole GPU is from an official nVidia document as well...  https://www.nvidia.com/content/PDF/tegra_white_papers/Tegra_K1_whitepaper_v1.0.pdf#page=18
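A quick sanity check on both figures in Python, using only the numbers quoted from those two documents (no assumptions about RSX internals):

    clock_hz = 550e6  # RSX core clock

    # The "wild" peak: 24 pixel pipes x 27 FLOPs + 8 vertex pipes x 10 FLOPs, per cycle.
    peak = (24 * 27 + 8 * 10) * clock_hz
    print(peak / 1e9)        # ~400.4 GFLOPS - the silly number above

    # The K1 whitepaper's 192 GFLOPS implies far fewer usable FLOPs per cycle:
    print(192e9 / clock_hz)  # ~349 FLOPs/cycle, versus the 728 counted above

So the two official figures really do disagree by more than 2x, which is exactly the conflict I mean.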

 

This is the very reason why I said I would love to see the math behind those numbers for the GC and XBOX - because, without knowing the underlying architectures, it's just guesswork, and given the base configs (4:1:4:4 @ 162MHz vs 4:2:8:4 @ 233MHz), some of those numbers look... well, quite silly.
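For what it's worth, fillrate is the one thing those core configs do pin down without knowing the underlying architecture: pixels/s = pixel pipes x clock, texels/s = TMUs x clock. A minimal sketch using the configs quoted above:

    def fillrates(pixel_pipes: int, tmus: int, clock_mhz: int):
        # Theoretical single-textured peaks: (Mpixels/s, Mtexels/s).
        return pixel_pipes * clock_mhz, tmus * clock_mhz

    print("GC   (4:1:4:4 @162MHz):", fillrates(4, 4, 162))  # (648, 648)
    print("XBOX (4:2:8:4 @233MHz):", fillrates(4, 8, 233))  # (932, 1864)

FLOPS, by contrast, depends on how many ALU operations each pipe issues per cycle - exactly the architecture-specific detail those quoted numbers never state.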



Pemalite said:

No it doesn't. People need to stop believing this.

AMD, for example, has consistently iterated upon its Graphics Core Next design...
A 4-Teraflop Graphics Core Next 1.0 GPU will lose to a 4-Teraflop Graphics Core Next 5.0 GPU. - I can even demonstrate this if you want.

Here we have the Radeon 7970 (4.0 - 4.3 Teraflop) against the Radeon 280 (2.96 - 3.34 Teraflop).
The Radeon 7970 should be able to wipe the floor with its almost 1-Teraflop advantage, right? Wrong.
https://www.anandtech.com/bench/product/1722?vs=1751

They are both Graphics Core Next.
Again, FLOPS is irrelevant.

FLOPS is a theoretical number, not a real-world one. The GPUs in the Playstation 4 Pro and Xbox One X can do more work per FLOP than those in the base Xbox One and Playstation 4 consoles - that's a fact, due to efficiency tweaks in other areas.

I thought the base PS4 and Xbox One GPUs were only one GCN generation behind those of the Pro and One X, but on further investigation, it turns out it's actually two. So yeah, the performance difference is probably a fair bit more than the raw FLOPS value alone would indicate (and that's before you take into account that the One X's new memory set-up completely blows the doors off those of the older models).



Pemalite said:
SammyGiireal said:

All I can remember is the GC having the sweetest-looking water in games. But I seriously doubt the GameCube could run Halo 2, Dead or Alive, Half-Life 2, Forza, Ninja Gaiden, etc. The Xbox was a beast. I must mention here, though, that RE4 on the GC totally murders the PS2 version graphically. I bought the PS2 version for the extra content but it was a step down from the gorgeous GC version.

The Original Xbox is able to produce better shader-driven water than the Gamecube... But the Gamecube can have better-textured water.

However... To be fair, the Gamecube is technically capable of every graphics effect that the Original Xbox is capable of; it just requires additional passes or workarounds to achieve them... Which, let's face it, generally never happened until the Wii came along anyway and developers had years more to extract from its similar architecture.


Was the GC documentation even available in English then? IIRC Nintendo was extremely late to the party when it came to international documentation for devs, which would have made it much easier for western devs to get the full power of the OG Xbox compared to the GC.

Edit: tried to check it out and it seems false. My terrible memory at it again, sorry.

Last edited by RenCutypoison - on 17 September 2018

Pemalite said: 
Azzanation said:

The GPUs I am not too sure about; the GC's GPU could render more effects per polygon, so I wouldn't be surprised if the GC's GPU was technically better too.

There are many aspects where the Gamecube's GPU is better than the Xbox's GPU.
But there are many aspects where the Xbox's GPU is better than the Gamecube's.

However... The strengths of the Xbox GPU tended to outweigh its limitations, hence why its games were typically a step up overall.

Well, that's just the thing: it seems whatever game focused on a given console will outperform the other. However, I strongly disagree when people claim the Xbox to be the most powerful console of the 6th gen. In my opinion it was the GC. I believe the GC could run any Xbox game, whereas I see the Xbox struggling to run games built around the GC's hardware. As examples, Splinter Cell was designed around the Xbox hardware and the GC could run it (not as good, but far from broken), whereas the Xbox, judging by Rogue Leader, could barely do half the frame rate of the GC version, hence why there was no Xbox port. Now, that's just rumours stated by Factor 5 at the time.

Xbox also had better multiplats because its design was very similar to PCs, whereas the GC, like most consoles, was more alien and a little harder to work with, so in many cases the lead platform was the Xbox.

I also strongly disagree when people say the Xbox could render the better-looking water. I find that the best-looking water in games that gen was on the GC. Games like Mario Sunshine looked absolutely amazing, and the GC was actually rendering the waves; it wasn't just a texture placed on top of another texture to make the water detail look good and mimic waves, it actually did waves. Also, Wave Race: Blue Storm still has some of the best wave effects I've seen apart from Black Flag and Sea of Thieves, and that game was made 17 years ago.

https://www.youtube.com/watch?v=4q7qMwe3_zk

I just find the Xbox's design wasn't as good as the GC's. I find the Xbox had bigger bottlenecks in places, whereas the GC had a perfect blend between CPU and GPU. The Xbox was all about brute force and was basically a supercharged PS2, whereas the GC was a cleverly designed machine capable of much more with less horsepower.

Also keep in mind that Nintendo was very honest with their numbers for the GC, claiming low poly figures but with a bunch of effects applied, whereas the PS2 and Xbox camps claimed they could do more polygons but without any effects, basically wireframe mode.



Azzanation said:

Well, that's just the thing: it seems whatever game focused on a given console will outperform the other. However, I strongly disagree when people claim the Xbox to be the most powerful console of the 6th gen. In my opinion it was the GC. I believe the GC could run any Xbox game, whereas I see the Xbox struggling to run games built around the GC's hardware.

Doom 3, Chronicles of Riddick, Morrowind... to name a few... I doubt they could've run on the GC without some serious cutbacks.

Yes, the GC was a polygon pusher, but the XBOX had the more capable and modern GPU.



I'm no tech guy, but I had a GameCube and a PS2 that generation. The GameCube definitely had some great-looking software, but I don't think it ever had the best version of any multi-platform game aside from a few occasions (i.e. RE4, after it got ported to PS2). There were a few times where the GameCube port was so bad it was embarrassing. For example, I bought True Crime for the 'Cube. It ran okay, but I noticed that there was music listed in the manual that I never heard. I swapped it for the PS2 version. There was more music, more missions, better performance, etc. I was actually mad at how poorly the GameCube was being treated. Other examples included SSX and Splinter Cell (whole levels had to be redesigned for the 'Cube and PS2).

That could come down to storage media and how much effort went into a port, though.



Another example of how multiplats can't really be the deciding factor: I remember reading that the Xbox version of MGS2 was way less capable than the PS2 version.



To sum up, the Xbox was much stronger than the GC and PS2, by a wide margin.


