Original XBOX performance


Pemalite said:

Sometimes people will add other blocks of the GPU into the equation to inflate numbers (i.e. geometry).
But keep in mind that the "flops" of the Xbox/Dreamcast/Gamecube/Playstation 2 will actually be different from those of the Xbox 360/Xbox One/Playstation 3/Playstation 4, as they operate at a different precision.

It just reinforces the fact that flops is a useless metric.

It makes sense to use FLOPS when comparing modern-day consoles, since they're all based on the same underlying GPU architecture (apart from the Switch, and even that's similar enough to the others to be at least a useful ballpark figure), but the XB/DC/GC/PS2 all had completely different GPU designs, making it a lot less useful.

John2290 said:
All I remember was that the Xbox was the strongest game console by a small margin, and in person the gains were slightly noticeable, yet there was a different style to the first-party titles. Whatever the specs gain was, I believe the style of the games had more of an impact, at least at the time.

I think that had as much to do with the fact that probably 90% of games from that generation were designed with the PS2 in mind, and just given bumps to resolution, texture filtering quality and anti-aliasing when they were brought over to the Xbox and Gamecube. Microsoft and Nintendo had a better understanding of what their console's strengths were, and so designed their first-party titles to take advantage of them.



Pemalite said: 
Azzanation said:

The GameCube to me always had overall better-looking games. I never owned an OG Xbox, but from what I saw, the Xbox lacked effects that the GameCube could dish out. From memory I believed the Cube could render 8 different effects per polygon compared to the PS2's 2 effects per polygon. Not sure what the Xbox was doing, but I also believed it was less, around the 4 or 6 mark.

That generally stems from Nintendo's pretty talented art direction.
But from a technical perspective, Original Xbox games were the best of that console generation.

The Original Xbox also had a pretty potent CPU as well... Which meant that Physics started to become more prominent in games. (Half Life 2 for example)
And the GPU was such a big step up that there were a handful of games that operated in High Definition.

In comparison the GC's CPU blew the doors off the Pentium 3 processor. The P3 was only a 32-bit CPU compared to the GC's 128-bit processor, plus the IBM-made Dolphin CPU was one of the world's smallest designs, meaning its pipelines were superior. The P3 was a decent CPU mainly due to its high GHz.

The GPUs I am not too sure about; the GC's GPU could render more effects per polygon, so I wouldn't be surprised if the GC's GPU was technically better too.

Xbox had superior ports, that's a given due to its x64 architecture, the same as PCs at the time, which meant porting was simple. However, games built from the ground up with the GC in mind struggled to run on the Xbox. There was an old article from Factor 5 saying the Xbox's hardware could not render Rogue Leader at a comfortable frame rate (below 30 frames) compared to the silky smooth 60-frame GC version. Unfortunately I cannot find that article anymore. So I guess that's just my word at this stage.



tripenfall said:
HoloDust said:
I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on what the actual theoretical peak for the RSX in the PS3 is.

This article offers a formula, but it's GameSpot, so take it with a grain of salt.

 

https://www.gamespot.com/articles/console-specs-compared-xbox-one-x-ps4-pro-switch-a/1100-6443665/

 

Their formula is 

The basic formula for computing teraFLOPS for a GPU is:

(# of parallel GPU processing cores multiplied by peak clock speed in MHz multiplied by two) divided by 1,000,000

Let's see how we can use that formula to calculate the teraFLOPS in the Xbox One. The system's integrated graphics has 768 parallel processing cores. The GPU's peak clock speed is 853MHz. When we multiply 768 by 853 and then again by two, and then divide that number by 1,000,000, we get 1.31 teraFLOPS.

Anyone care to weigh in on this formula? I honestly have no idea...

Yeah, this is the usual formula that's been used for quite some time - and it is correct for more modern GPU architectures - I'd say anything from ATI's unified shaders onward...
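
As a quick sanity check, here's that formula written out in Python, using only the Xbox One numbers quoted in the article (768 cores, 853MHz); the "multiply by two" is the usual assumption of one fused multiply-add, counted as two floating-point operations, per core per clock.

```python
# Sketch of the formula quoted above, applied to the Xbox One numbers from
# the article: 768 parallel processing cores at a peak clock of 853 MHz.
# The "x2" reflects the usual assumption of one fused multiply-add
# (counted as two floating-point operations) per core per clock.

def teraflops(cores, peak_clock_mhz):
    return (cores * peak_clock_mhz * 2) / 1_000_000

print(round(teraflops(768, 853), 2))  # 1.31, matching the article's figure
```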

Pemalite said:
HoloDust said:
I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on what the actual theoretical peak for the RSX in the PS3 is.

Number of cores * Number of SIMD units * ((Number of mul-add units*2) + Number of mul units) * Clockrate.

Sometimes people will add other blocks of the GPU into the equation to inflate numbers (i.e. geometry).
But keep in mind that the "flops" of the Xbox/Dreamcast/Gamecube/Playstation 2 will actually be different from those of the Xbox 360/Xbox One/Playstation 3/Playstation 4, as they operate at a different precision.

It just reinforces the fact that flops is a useless metric.

Yeah, I know about that one too...or a variation of it...yet I have no idea how anyone comes up with the GC and XBOX numbers.

The GC has a 4:1:4:4 core config @162MHz, while the XBOX has 4:2:8:4 @233MHz (pixel:vertex:TMU:ROP)...how one comes to actual FLOPS is beyond me without knowing the particular architectures.

For example, I still can't figure out how to convert the PS3's RSX GPU specs into FLOPS (the 24:8:24:8 part), since, to me at least, something seems to be off with the quoted numbers, as if they conflict with each other. For example, the current GFLOPS figure on the wiki is 192 for pixel shaders (I remember this being changed numerous times), and this is quoted from the K1 whitepaper, which states 192 GFLOPS for the whole PS3 GPU.

  • 24 parallel pixel-shader ALU pipelines clocked at 550 MHz
    • 5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 texture ALU)
    • 27 floating-point operations per pipeline, per cycle
    • Floating Point Operations per second: 192 GFLOPS
  • 8 parallel vertex pipelines
    • 2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
    • 10 FLOPS per pipeline, per cycle
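
For what it's worth, Pemalite's per-SIMD formula above is easy to write out as a function; the example numbers below are purely hypothetical placeholders, since (as noted) the per-pipeline ALU details for Flipper and the Xbox's NV2A aren't really known.

```python
# A sketch of the formula quoted above:
#   cores * SIMD units * ((mul-add units * 2) + mul units) * clock rate
# The inputs used here are hypothetical placeholders, NOT real Flipper/NV2A
# figures - the whole problem is that those per-pipeline details aren't known.

def gflops(cores, simd_units, mad_units, mul_units, clock_mhz):
    flops_per_core_per_clock = simd_units * (mad_units * 2 + mul_units)
    return cores * flops_per_core_per_clock * clock_mhz / 1_000  # MHz -> GFLOPS

# e.g. a made-up 4-pipeline part with one SIMD of 4 MAD units each, at 233 MHz
print(gflops(cores=4, simd_units=1, mad_units=4, mul_units=0, clock_mhz=233))  # 7.456
```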


OlfinBedwere said:

It makes sense to use FLOPS when comparing modern-day consoles, since they're all based on the same underlying GPU architecture (apart from the Switch, and even that's similar enough to the others to be at least a useful ballpark figure)

No it doesn't. People need to stop believing this.

AMD for example has consistently iterated upon its Graphics Core Next design...
A 4 Teraflop Graphics Core Next 1.0 GPU will lose to a 4 Teraflop Graphics Core Next 5.0 GPU. - I can even demonstrate this if you want.

Here we have the Radeon 7970 (4.0 - 4.3 Teraflop) against the Radeon 280 (2.96 - 3.34 Teraflop).
The Radeon 7970 should be able to wipe the floor with its almost 1 Teraflop advantage, right? Wrong.
https://www.anandtech.com/bench/product/1722?vs=1751

They are both Graphics Core Next.
Again, Flops is irrelevant.

FLOPS is a theoretical number, not a real-world one. The GPU in the Playstation 4 Pro and Xbox One X can do more work per flop than the base Xbox One and Playstation 4 consoles, that's a fact, due to efficiency tweaks in other areas.

OlfinBedwere said:

I think that had as much to do with the fact that probably 90% of games from that generation were designed with the PS2 in mind, and just given bumps to resolution, texture filtering quality and anti-aliasing when they were brought over to the Xbox and Gamecube. Microsoft and Nintendo had a better understanding of what their console's strengths were, and so designed their first-party titles to take advantage of them.

Ports from the PC to the Xbox did shine on the Xbox, though.

Azzanation said:

In comparison the GC's CPU blew the doors off the Pentium 3 processor. The P3 was only a 32-bit CPU compared to the GC's 128-bit processor, plus the IBM-made Dolphin CPU was one of the world's smallest designs, meaning its pipelines were superior. The P3 was a decent CPU mainly due to its high GHz.

Nah. - You are very wrong.
For one, Gekko is a 32-bit, not a 128-bit, processor.
https://en.wikipedia.org/wiki/Gekko_(microprocessor)

Bits do not correspond to performance either... Most games wouldn't have leveraged 64-bit registers anyway, as it would consume more RAM.
Bits are not a representation of performance.

The 733MHz Celeron again beats the 500MHz PowerPC equivalent of the Gamecube's CPU... and that ignores the fact that the Gamecube's CPU is clocked lower and that the Xbox's CPU has performance enhancements over that.
The P6-derived core of Intel's chips typically had a pretty decent, industry-leading edge for the most part.

Even Anandtech recognizes the Intel chip would be superior. https://www.anandtech.com/show/858/2


Azzanation said:

The GPUs I am not too sure about; the GC's GPU could render more effects per polygon, so I wouldn't be surprised if the GC's GPU was technically better too.

There are many aspects where the Gamecube's GPU is better than the Xbox's GPU.
But there are many aspects where the Xbox's GPU is better than the Gamecube's.

However... The strengths of the Xbox GPU tended to outweigh its limitations, hence why its games were typically a step up overall.

Azzanation said:

Xbox had superior ports, that's a given due to its x64 architecture, the same as PCs at the time, which meant porting was simple.

More false information.
The Xbox CPU is not x64; 64-bit extensions weren't tacked on until Intel adopted AMD's x86-64 extensions (a.k.a. EM64T) with the Prescott variant of the Pentium 4.
https://en.wikipedia.org/wiki/X86-64
https://en.wikipedia.org/wiki/Pentium_4#Prescott

The Xbox CPU is x86, as it's Pentium 3-derived.
https://en.wikipedia.org/wiki/Pentium_III
https://en.wikipedia.org/wiki/Xbox_technical_specifications

Azzanation said:

However, games built from the ground up with the GC in mind struggled to run on the Xbox. There was an old article from Factor 5 saying the Xbox's hardware could not render Rogue Leader at a comfortable frame rate (below 30 frames) compared to the silky smooth 60-frame GC version. Unfortunately I cannot find that article anymore. So I guess that's just my word at this stage.

Because Rogue Leader leveraged the Gamecube's GPU strengths rather than the Xbox's.
If you were to make a shader-heavy game that gobbled up RAM like no tomorrow... the Gamecube would also struggle.

HoloDust said:

Yeah, I know about that one too...or a variation of it...yet I have no idea how anyone comes up with the GC and XBOX numbers.

The GC has a 4:1:4:4 core config @162MHz, while the XBOX has 4:2:8:4 @233MHz (pixel:vertex:TMU:ROP)...how one comes to actual FLOPS is beyond me without knowing the particular architectures.

For example, I still can't figure out how to convert the PS3's RSX GPU specs into FLOPS (the 24:8:24:8 part), since, to me at least, something seems to be off with the quoted numbers, as if they conflict with each other. For example, the current GFLOPS figure on the wiki is 192 for pixel shaders (I remember this being changed numerous times), and this is quoted from the K1 whitepaper, which states 192 GFLOPS for the whole PS3 GPU.

  • 24 parallel pixel-shader ALU pipelines clocked at 550 MHz
    • 5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 texture ALU)
    • 27 floating-point operations per pipeline, per cycle
    • Floating Point Operations per second: 192 GFLOPS
  • 8 parallel vertex pipelines
    • 2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
    • 10 FLOPS per pipeline, per cycle

They could be including Vertex performance in that calculation.
Citing nVidia is a pretty dubious affair, because nVidia will want to fluff up numbers as much as possible.

Last edited by Pemalite - on 16 September 2018

To correct earlier assumptions, the "Gekko" CPU that powers the Gamecube is 32-bit. The "Flipper" GPU is 64-bit.



Pemalite said:
HoloDust said:

Yeah, I know about that one too...or a variation of it...yet I have no idea how anyone comes up with the GC and XBOX numbers.

The GC has a 4:1:4:4 core config @162MHz, while the XBOX has 4:2:8:4 @233MHz (pixel:vertex:TMU:ROP)...how one comes to actual FLOPS is beyond me without knowing the particular architectures.

For example, I still can't figure out how to convert the PS3's RSX GPU specs into FLOPS (the 24:8:24:8 part), since, to me at least, something seems to be off with the quoted numbers, as if they conflict with each other. For example, the current GFLOPS figure on the wiki is 192 for pixel shaders (I remember this being changed numerous times), and this is quoted from the K1 whitepaper, which states 192 GFLOPS for the whole PS3 GPU.

  • 24 parallel pixel-shader ALU pipelines clocked at 550 MHz
    • 5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 texture ALU)
    • 27 floating-point operations per pipeline, per cycle
    • Floating Point Operations per second: 192 GFLOPS
  • 8 parallel vertex pipelines
    • 2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
    • 10 FLOPS per pipeline, per cycle

They could be including Vertex performance in that calculation.
Citing nVidia is a pretty dubious affair, because nVidia will want to fluff up numbers as much as possible.

Yeah - I remember around that time people were using some wild numbers, 400 or so GFLOPS. Which, when you get down to the math, really comes to something as silly as this:

(24 × 27 FLOPs + 8 × 10 FLOPs) × 550 MHz ≈ 400 GFLOPS

And those 27-per-cycle and 10-per-cycle numbers are indeed from an official nVidia document - or so it seems...   https://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-7800-GTX-GPU-Review/Hardware-Details

Yet again, the 192 GFLOPS for the whole GPU is from an official nVidia document as well...  https://www.nvidia.com/content/PDF/tegra_white_papers/Tegra_K1_whitepaper_v1.0.pdf#page=18

 

This is the very reason why I said I would love to see the math behind those numbers for the GC and XBOX - because, without knowing the underlying architectures, it's just guesswork, and given the base configs (4:1:4:4 @162MHz vs 4:2:8:4 @233MHz), some of those numbers look...well, quite silly.
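
To make the discrepancy concrete, here's a minimal sketch using only the figures quoted in this thread: the per-pipeline numbers from the 7800 GTX material versus the 192 GFLOPS total in the Tegra K1 whitepaper.

```python
# Reproducing the "wild" ~400 GFLOPS RSX figure from the per-pipeline numbers
# quoted above (24 pixel pipes x 27 FLOPs/cycle, 8 vertex pipes x 10 FLOPs/cycle,
# everything naively assumed to run at 550 MHz), and contrasting it with the
# 192 GFLOPS the Tegra K1 whitepaper quotes for the whole PS3 GPU.

CLOCK_MHZ = 550

pixel_flops_per_clock = 24 * 27    # 648
vertex_flops_per_clock = 8 * 10    # 80

naive_total_gflops = (pixel_flops_per_clock + vertex_flops_per_clock) * CLOCK_MHZ / 1_000
k1_whitepaper_gflops = 192         # figure quoted for the whole GPU; the math behind it isn't shown

print(naive_total_gflops)    # ~400.4
print(k1_whitepaper_gflops)  # 192
```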



Pemalite said:

No it doesn't. People need to stop believing this.

AMD for example has consistently iterated upon its Graphics Core Next design...
A 4 Teraflop Graphics Core Next 1.0 GPU will lose to a 4 Teraflop Graphics Core Next 5.0 GPU. - I can even demonstrate this if you want.

Here we have the Radeon 7970 (4.0 - 4.3 Teraflop) against the Radeon 280 (2.96 - 3.34 Teraflop).
The Radeon 7970 should be able to wipe the floor with its almost 1 Teraflop advantage, right? Wrong.
https://www.anandtech.com/bench/product/1722?vs=1751

They are both Graphics Core Next.
Again, Flops is irrelevant.

FLOPS is a theoretical number, not a real-world one. The GPU in the Playstation 4 Pro and Xbox One X can do more work per flop than the base Xbox One and Playstation 4 consoles, that's a fact, due to efficiency tweaks in other areas.

I thought the base PS4 and Xbox One GPUs were only one GCN generation behind those of the Pro and One X, but on further investigation it turns out it's actually two. So yeah, the performance difference is probably a fair bit more than the raw FLOPS value alone would indicate (and that's before you take into account that the One X's new memory set-up completely blows the doors off those of the older models).



Pemalite said:
SammyGiireal said:

All I can remember is the GC having the sweetest-looking water in games. But I seriously doubt the GameCube could run Halo 2, Dead or Alive, Half Life 2, Forza, Ninja Gaiden, etc. The Xbox was a beast. I must mention here though that RE4 on the GC totally murders the PS2 version graphically. I bought the PS2 version for the extra content, but it was a step down from the gorgeous GC version.

The Original Xbox is able to have better shader-driven water than the Gamecube... but the Gamecube can have better textured water.

However... to be fair, the Gamecube is technically capable of every graphics effect that the Original Xbox is capable of, it just requires additional passes or workarounds to achieve it... which, let's face it, generally never happened until the Wii came along anyway and developers had years more to extract from its similar architecture.


Was the GC documentation even available in English then? IIRC Nintendo was extremely late to the party when it came to international documentation for devs, which would have made it much easier for western devs to get the full power of the OG Xbox compared to the GC.

Edit: I tried to check it out and it seems false. My terrible memory at it again, sorry.

Last edited by RenCutypoison - on 17 September 2018

Pemalite said: 
Azzanation said:

The GPUs I am not too sure about; the GC's GPU could render more effects per polygon, so I wouldn't be surprised if the GC's GPU was technically better too.

There are many aspects where the Gamecube's GPU is better than the Xbox's GPU.
But there are many aspects where the Xbox's GPU is better than the Gamecube's.

However... The strengths of the Xbox GPU tended to outweigh its limitations, hence why its games were typically a step up overall.

Well that's just the thing, it seems whatever game focused on a given console will outperform the other. However, I strongly disagree when people claim the Xbox to be the most powerful console of the 6th gen. In my opinion it was the GC. I believe the GC could run any Xbox game, whereas I see the Xbox struggling to run games built around the GC's hardware. As examples, Splinter Cell was designed around the Xbox hardware and the GC could run it (not as well) but far from broken, whereas the Xbox, judging by Rogue Leader, could barely do half the frame rate of the GC version, hence why there was no Xbox port. Now that's just rumours stated by Factor 5 at the time.

Xbox also had better multiplats because its design was very similar to PCs, whereas the GC, like most consoles, was more alien and a little harder to work with, so in many cases the lead platform was the Xbox.

I also strongly disagree when people say the Xbox could render the better-looking water. I find that the best-looking water in games that gen was on the GC. Games like Mario Sunshine looked absolutely amazing, and the GC was actually rendering the waves; it wasn't just a texture placed on top of another texture to make the water detail look good and mimic waves, it actually did waves. Also, Wave Race: Blue Storm still has some of the best wave effects I've seen apart from Black Flag and Sea of Thieves, and that game was made 17 years ago.

https://www.youtube.com/watch?v=4q7qMwe3_zk

I just find the Xbox's design wasn't as good as the GC's. I find the Xbox had bigger bottlenecks, whereas the GC had a perfect blend between CPU and GPU. Xbox was all about brute force and was basically a supercharged PS2, however the GC was a cleverly designed machine capable of much more with less horsepower.

Also keep in mind that Nintendo was very honest with their numbers for the GC, claiming low poly figures but with a bunch of effects applied, whereas PS2 and Xbox claimed they could do more polygons but without any effects, basically wireframe mode.



Azzanation said:

Well that's just the thing, it seems whatever game focused on a given console will outperform the other. However, I strongly disagree when people claim the Xbox to be the most powerful console of the 6th gen. In my opinion it was the GC. I believe the GC could run any Xbox game, whereas I see the Xbox struggling to run games built around the GC's hardware.

Doom 3, Chronicles of Riddick, Morrowind...to name a few...I doubt they could've run on the GC without some serious cutbacks.

Yes, the GC was a polygon pusher, but the XBOX had the more capable and modern GPU.