
Original XBOX performance

jlauro said:
The GC had decent FP from both CPU and GPU; most others only had decent FP from one. It could do closer to 11 GFLOPS.
It also had the most impressive matrix transformations, able to do 3 billion/sec (PS2 and XBOX at 2 billion).

Overall, I would say the graphics on the GC were more impressive than the Original Xbox's, except for one thing... the biggest and most obvious graphics limitation for the GameCube was that its media was a 1.5GB mini DVD with no internal hard drive, while the Xbox could read 6.8GB from a DVD. That limited (or extra) storage played a factor in the graphics quality of many games...

The lack of storage space, coupled with less RAM, did lead to lower-res textures, and video playback suffered in games.

 

That said... the loading times on the GC blew away the competition, even though it was the first Nintendo home machine to use discs.



Why not check me out on YouTube and help me on the way to 2k subs over at www.youtube.com/stormcloudlive

jlauro said:
The GC had decent FP from both CPU and GPU; most others only had decent FP from one. It could do closer to 11 GFLOPS.
It also had the most impressive matrix transformations, able to do 3 billion/sec (PS2 and XBOX at 2 billion).

Overall, I would say the graphics on the GC were more impressive than the Original Xbox's, except for one thing... the biggest and most obvious graphics limitation for the GameCube was that its media was a 1.5GB mini DVD with no internal hard drive, while the Xbox could read 6.8GB from a DVD. That limited (or extra) storage played a factor in the graphics quality of many games...

The mini DVDs were probably not as big of a limiter as you might think, especially with good compression algorithms.
It did mean full-motion video and high-quality audio tended to get cut back first, though.

The GameCube's CPU is derived from the IBM PowerPC 750.
A 733MHz Celeron is able to beat a PowerPC 750 operating at 500MHz.
Add that the Original Xbox's CPU is a Coppermine-based Celeron, but with a 33% faster FSB and double the cache associativity... and the fact that the Cube's CPU operates at 486MHz rather than 500MHz... It's a no-brainer that the Xbox's CPU has the edge. And the games show it.

But the other bonus point that falls in the Xbox's favor is, of course... audio.
The Xbox leverages the impressive nVidia SoundStorm, which offloads a significant amount of CPU burden; roughly 4 GFLOPS worth of capability on that front alone.
Not only that, but it could do full 3D positional audio...
I kind of wish nVidia had kept up its audio efforts, or that Aureal had stuck around; they were pioneering some impressive audio solutions, and in some respects we have gone backwards since those days.

The Xbox also had more, faster RAM, which helped significantly.



--::{PC Gaming Master Race}::--

Pemalite said:

The mini DVDs were probably not as big of a limiter as you might think, especially with good compression algorithms.
It did mean full-motion video and high-quality audio tended to get cut back first, though.

The GameCube's CPU is derived from the IBM PowerPC 750.
A 733MHz Celeron is able to beat a PowerPC 750 operating at 500MHz.
Add that the Original Xbox's CPU is a Coppermine-based Celeron, but with a 33% faster FSB and double the cache associativity... and the fact that the Cube's CPU operates at 486MHz rather than 500MHz... It's a no-brainer that the Xbox's CPU has the edge. And the games show it.

But the other bonus point that falls in the Xbox's favor is, of course... audio.
The Xbox leverages the impressive nVidia SoundStorm, which offloads a significant amount of CPU burden; roughly 4 GFLOPS worth of capability on that front alone.
Not only that, but it could do full 3D positional audio...
I kind of wish nVidia had kept up its audio efforts, or that Aureal had stuck around; they were pioneering some impressive audio solutions, and in some respects we have gone backwards since those days.

The Xbox also had more, faster RAM, which helped significantly.

It's a great comparison, no doubt, but getting back to the original OP: with all of the information that you just listed above, do you still agree that the benchmark performance of the XBOX is 5.8 GFLOPS while the GameCube's is 9.4 GFLOPS?



tripenfall said:

It's a great comparison, no doubt, but getting back to the original OP: with all of the information that you just listed above, do you still agree that the benchmark performance of the XBOX is 5.8 GFLOPS while the GameCube's is 9.4 GFLOPS?

Yes.
But GFLOPS is a theoretical number; it is often not achievable in the real world.



--::{PC Gaming Master Race}::--

I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on the actual theoretical peak for the RSX in the PS3.



HoloDust said:
I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on the actual theoretical peak for the RSX in the PS3.

This article offers a formula, but it's GameSpot, so take it with a grain of salt.

 

https://www.gamespot.com/articles/console-specs-compared-xbox-one-x-ps4-pro-switch-a/1100-6443665/

 

Their formula is:

The basic formula for computing teraFLOPS for a GPU is:

(# of parallel GPU processing cores multiplied by peak clock speed in MHz multiplied by two) divided by 1,000,000

Let's see how we can use that formula to calculate the teraFLOPS in the Xbox One. The system's integrated graphics has 768 parallel processing cores. The GPU's peak clock speed is 853MHz. When we multiply 768 by 853 and then again by two, and then divide that number by 1,000,000, we get 1.31 teraFLOPS.

Anyone care to weigh in on this formula? I honestly have no idea...
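
For what it's worth, the arithmetic in the GameSpot formula is easy to sanity-check. Here's a minimal Python sketch; the function name and the default of 2 FLOPs per core per cycle (i.e. one fused multiply-add) are my own assumptions, not from the article:

```python
# Sanity check of GameSpot's formula:
# (cores * peak clock in MHz * 2) / 1,000,000 = teraFLOPS
def teraflops(cores: int, clock_mhz: float, flops_per_cycle: int = 2) -> float:
    # The "2" assumes each core retires one fused multiply-add
    # (counted as 2 floating-point operations) per cycle.
    return cores * clock_mhz * flops_per_cycle / 1_000_000

# Xbox One: 768 cores at 853MHz, as in the article
print(teraflops(768, 853))  # ~1.31 TFLOPS, matching GameSpot's figure
```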



HoloDust said:
I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on the actual theoretical peak for the RSX in the PS3.

Number of cores * Number of SIMD units * ((Number of mul-add units * 2) + Number of mul units) * Clock rate.

Sometimes people will add other blocks of the GPU into the equation to inflate numbers (e.g. geometry).
But keep in mind that the FLOPS of the Xbox/Dreamcast/Gamecube/Playstation 2 are actually different from those of the Xbox 360/Xbox One/Playstation 3/Playstation 4, as they operate at different precisions.

It just reinforces the fact that FLOPS is a useless metric.
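
As a rough illustration of how Pemalite's formula for these older, non-unified GPUs differs from the modern one, here's a sketch in the same vein. The unit counts in the example call are hypothetical placeholders, not real NV2A or Flipper figures:

```python
# Pemalite's formula for pre-unified-shader GPUs:
# cores * SIMD units * ((mul-add units * 2) + mul units) * clock rate
def gflops(cores, simd_units, muladd_units, mul_units, clock_mhz):
    # A multiply-add counts as 2 FLOPs, a standalone multiply as 1.
    flops_per_cycle = simd_units * (muladd_units * 2 + mul_units)
    return cores * flops_per_cycle * clock_mhz / 1000

# Hypothetical config: 4 pipelines, 1 SIMD unit each, 2 MADD + 1 MUL, 233MHz
print(gflops(cores=4, simd_units=1, muladd_units=2, mul_units=1, clock_mhz=233))
# -> 4.66 GFLOPS
```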



--::{PC Gaming Master Race}::--

Pemalite said:

Sometimes people will add other blocks of the GPU into the equation to inflate numbers (e.g. geometry).
But keep in mind that the FLOPS of the Xbox/Dreamcast/Gamecube/Playstation 2 are actually different from those of the Xbox 360/Xbox One/Playstation 3/Playstation 4, as they operate at different precisions.

It just reinforces the fact that FLOPS is a useless metric.

It makes sense to use FLOPS when comparing modern-day consoles, since they're all based on the same underlying GPU architecture (apart from the Switch, and even that's similar enough to the others that it gives at least a useful ballpark figure), but the XB/DC/GC/PS2 all had completely different GPU designs, making it a lot less useful.

John2290 said:
All I remember is that the Xbox was the strongest game console by a small margin, and in person the gains were slightly noticeable, yet there was a different style to the first-party titles. Whatever the specs gain was, I believe the style of the games had more of an impact, at least at the time.

I think that had as much to do with the fact that probably 90% of games from that generation were designed with the PS2 in mind, and just given bumps in resolution, texture filtering quality and anti-aliasing when they were brought over to the Xbox and GameCube. Microsoft and Nintendo had a better understanding of what their consoles' strengths were, and so designed their first-party titles to take advantage of them.



Pemalite said: 
Azzanation said:

GameCube to me always had overall better-looking games. I never owned an OG Xbox, but from what I saw, the Xbox lacked effects that the GameCube could dish out. From memory, I believed the Cube could render 8 different effects per polygon compared to the PS2's 2 effects per polygon. Not sure what the Xbox was doing, but I also believed it was less, around the 4 or 6 mark.

That generally stems from Nintendo's pretty talented art direction.
But from a technical perspective, Original Xbox games were the best of that console generation.

The Original Xbox also had a pretty potent CPU... which meant that physics started to become more prominent in games (Half-Life 2, for example).
And the GPU was such a big step up that there were a handful of games that operated in high definition.

In comparison, the GC's CPU blew the doors off the Pentium 3 processor. The P3 was only a 32-bit CPU compared to the GC's 128-bit processor; plus, the IBM-made Dolphin CPU was one of the world's smallest designs, meaning its pipelines were superior. The P3 was a decent CPU mainly due to its high GHz.

The GPUs I am not too sure about; the GC's GPU could render more effects per polygon, so I wouldn't be surprised if the GC's GPU was technically better too.

Xbox had superior ports, that's a given due to its x86 architecture, the same as PCs at the time, which meant porting was simple. However, games built from the ground up with the GC in mind struggled to run on the Xbox. There was an old article from Factor 5 saying the Xbox's hardware could not render Rogue Leader at a comfortable frame rate (below 30 frames) compared to the silky-smooth 60-frame GC version. Unfortunately I cannot find that article anymore, so I guess that's just my word at this stage.



tripenfall said:
HoloDust said:
I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on the actual theoretical peak for the RSX in the PS3.

This article offers a formula, but it's GameSpot, so take it with a grain of salt.

 

https://www.gamespot.com/articles/console-specs-compared-xbox-one-x-ps4-pro-switch-a/1100-6443665/

 

Their formula is:

The basic formula for computing teraFLOPS for a GPU is:

(# of parallel GPU processing cores multiplied by peak clock speed in MHz multiplied by two) divided by 1,000,000

Let's see how we can use that formula to calculate the teraFLOPS in the Xbox One. The system's integrated graphics has 768 parallel processing cores. The GPU's peak clock speed is 853MHz. When we multiply 768 by 853 and then again by two, and then divide that number by 1,000,000, we get 1.31 teraFLOPS.

Anyone care to weigh in on this formula? I honestly have no idea...

Yeah, this is the usual formula that's been used for quite some time, and it is correct for more modern GPU architectures; I'd say anything from ATI's unified shaders onward...

Pemalite said:
HoloDust said:
I would really love to see the math behind those GFLOPS numbers, both for the XBOX and the GC, given that even today a lot of folks can't agree on the actual theoretical peak for the RSX in the PS3.

Number of cores * Number of SIMD units * ((Number of mul-add units * 2) + Number of mul units) * Clock rate.

Sometimes people will add other blocks of the GPU into the equation to inflate numbers (e.g. geometry).
But keep in mind that the FLOPS of the Xbox/Dreamcast/Gamecube/Playstation 2 are actually different from those of the Xbox 360/Xbox One/Playstation 3/Playstation 4, as they operate at different precisions.

It just reinforces the fact that FLOPS is a useless metric.

Yeah, I know about that one too... or a variation of it... yet I have no idea how anyone came up with the GC and XBOX numbers.

The GC has a 4:1:4:4 core config @ 162MHz, while the XBOX has 4:2:8:4 @ 233MHz (pixel:vertex:TMU:ROP)... how one gets from those to actual FLOPS is beyond me without knowing the particular architectures.

For example, I still can't figure out how to convert the PS3's RSX GPU specs into FLOPS (the 24:8:24:8 part), since, to me at least, something seems to be off with the quoted numbers, as if they conflict with each other. The current GFLOPS figure on the wiki is 192 for the pixel shaders (I remember this being changed numerous times), and it is quoted from the K1 whitepaper, which states 192 GFLOPS for the whole PS3 GPU.

  • 24 parallel pixel-shader ALU pipelines clocked at 550MHz
    • 5 ALU operations per pipeline, per cycle (2 vector4, 2 scalar/dual/co-issue and fog ALU, 1 texture ALU)
    • 27 floating-point operations per pipeline, per cycle
    • Floating-point operations per second: 192 GFLOPS
  • 8 parallel vertex pipelines
    • 2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
    • 10 FLOPS per pipeline, per cycle
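
Running the quoted per-pipeline figures through the math illustrates the conflict HoloDust describes: taken at face value, the pixel-shader pipelines alone come out well above the 192 GFLOPS quoted for the whole GPU. A quick check, assuming the quoted numbers are per-cycle peaks:

```python
# Cross-checking the quoted RSX pixel-shader spec against its own total.
pixel_pipes = 24
flops_per_pipe_per_cycle = 27  # from the quoted spec
clock_ghz = 0.550              # 550MHz pixel-shader clock

pixel_gflops = pixel_pipes * flops_per_pipe_per_cycle * clock_ghz
print(pixel_gflops)  # ~356.4 GFLOPS, not the 192 GFLOPS the same spec quotes
```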