OlfinBedwere said: It makes sense to use FLOPS when comparing modern-day consoles, since they're all based on the same underlying GPU architecture (apart from the Switch, and even that's similar enough to the others to be at least a useful ballpark figure) |
No it doesn't. People need to stop believing this.
AMD, for example, has consistently iterated upon its Graphics Core Next design...
A 4 Teraflop Graphics Core Next 1.0 GPU will lose to a 4 Teraflop Graphics Core Next 5.0 GPU. I can even demonstrate this if you want.
Here we have the Radeon 7970 (4.0 - 4.3 Teraflops) against the Radeon 280 (2.96 - 3.34 Teraflops).
The Radeon 7970 should be able to wipe the floor with it thanks to its almost 1 Teraflop advantage, right? Wrong.
https://www.anandtech.com/bench/product/1722?vs=1751
They are both Graphics Core Next.
Again, FLOPS alone is irrelevant.
FLOPS is a theoretical number, not a real-world one. The GPUs in the Playstation 4 Pro and Xbox One X can do more work per FLOP than those in the base Xbox One and Playstation 4 consoles; that's a fact, thanks to efficiency tweaks in other areas.
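To see why the headline figure is purely theoretical: it's just shader count x clock x 2 (one fused multiply-add per ALU per cycle), a peak-throughput formula that says nothing about how efficiently a given architecture actually feeds those ALUs. A quick sketch (the core counts and clocks below are the commonly published figures for each console, so treat them as approximations):

```python
def theoretical_gflops(shader_cores, clock_ghz, ops_per_cycle=2):
    """Peak GFLOPS = cores * clock * ops-per-cycle.

    ops_per_cycle is 2 because a fused multiply-add (FMA)
    counts as two floating-point operations.
    """
    return shader_cores * clock_ghz * ops_per_cycle

# Commonly published shader counts and clocks (approximate).
consoles = {
    "Xbox One":   (768,  0.853),
    "PS4":        (1152, 0.800),
    "PS4 Pro":    (2304, 0.911),
    "Xbox One X": (2560, 1.172),
}

for name, (cores, clock) in consoles.items():
    print(f"{name}: ~{theoretical_gflops(cores, clock):.0f} GFLOPS")
```

Every console here gets credited with its absolute best case; none of them sustain that in real workloads, and the newer GCN revisions get closer to it than the older ones.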
OlfinBedwere said: I think that had as much to do with the fact that probably 90% of games from that generation were designed with the PS2 in mind, and just given bumps to resolution, texture filtering quality and anti-aliasing when they were brought over to the Xbox and Gamecube. Microsoft and Nintendo had a better understanding of what their console's strengths were, and so designed their first-party titles to take advantage of them. |
Ports from the PC to the Xbox did shine on the Xbox, though.
Azzanation said: In comparison the GCs CPU blew the doors off the Pentium 3 processor. P3 was only a 32bit CPU compared to the GCs 128bit processor plus the IBM made dolphin CPU was one of the worlds smallest designs meaning its pipelines were superiour. The P3 was a decent CPU mainly due to its high Ghz. |
Nah, you are very wrong.
For one, Gekko is a 32-bit, not a 128-bit, processor.
https://en.wikipedia.org/wiki/Gekko_(microprocessor)
Bits are not a representation of performance either... and most games wouldn't have leveraged 64-bit registers anyway, as doing so would consume more RAM.
The 733MHz Celeron again beats the 500MHz PowerPC equivalent of the Gamecube's CPU... and that ignores the fact that the Gamecube's CPU is clocked lower and that the Xbox's CPU has performance enhancements over that stock Celeron.
The P6-derived core of Intel's chips typically had a decent, industry-leading edge for the most part.
Even Anandtech recognized that the Intel chip would be superior. https://www.anandtech.com/show/858/2
Azzanation said: The GPUs i am not too sure about, the GCs GPU could render more effects per polygon so i wouldnt be suprised if the GCs GPU was technically better too. |
There are many aspects where the Gamecube's GPU is better than the Xbox's GPU.
But there are many aspects where the Xbox's GPU is better than the Gamecube's.
However... the strengths of the Xbox GPU tended to outweigh its limitations, hence why its games were typically a step up overall.
Azzanation said: Xbox had superior ports, that's a given due to its X64 architecture same as PCs at the time which means porting was simply. |
More false information.
The Xbox CPU is not x64; 64-bit extensions weren't tacked on until Intel adopted AMD's x86-64 extensions (AKA EM64T) with the Prescott variant of the Pentium 4.
https://en.wikipedia.org/wiki/X86-64
https://en.wikipedia.org/wiki/Pentium_4#Prescott
The Xbox CPU is x86, as it's Pentium III-derived.
https://en.wikipedia.org/wiki/Pentium_III
https://en.wikipedia.org/wiki/Xbox_technical_specifications
Azzanation said: However games built from the ground up with GC in mind struggled to run on the Xbox. There was an old article from Factor 5 saying the Xboxs hardware could not render Rogue Leader at a comfortable frame rate (below 30 frames) compared to the silky smooth 60 frame GC version. Unfortunately i cannot find that article anymore. So i guess thats just my word at this stage. |
Because Rogue Leader leveraged the Gamecube's GPU strengths rather than the Xbox's.
If you were to make a shader-heavy game that gobbled up RAM like there's no tomorrow... the Gamecube would also struggle.
HoloDust said: Yeah, I know about that one too...or variation of it...yet I have no idea how anyone come up with GC and XBOX numbers. GC has 4:1:4:4 @162MHz, while XBOX has 4:2:8:4 core config @233MHz (pixel:vertex:TMU:ROP)...how one comes to actual FLOPS is beyond me without knowing particular architectures. For example, I still can't figure out how to convert PS3's RSX GPUs specs into FLOPS (24:8:24:8 part), since, to me at least, something seems to be off with quoted numbers, as if they are conflicting each other. For example, current GFLOPS at wiki are 192 for pixel shaders (I remember this being changed numerous time), and this is quoted from K1 whitepaper, which states 192GFLOPS for whole PS3's GPU. |
They could be including vertex performance in that calculation.
Citing nVidia is a pretty dubious affair anyway, because nVidia will want to fluff up its numbers as much as possible.
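For what it's worth, here's one plausible reconstruction of that 192 GFLOPS figure, assuming the commonly described G70-style layout where each of RSX's 24 pixel pipes has two 4-wide MADD units, and each of the 8 vertex units has a 5-wide MADD unit. Those unit widths are my assumptions from the G70 family, not confirmed nVidia specs:

```python
# Assumed G70-style ALU layout for RSX at 500 MHz.
# pipes * ALUs-per-pipe * lanes-per-ALU * 2 (MADD = 2 FLOPs) * clock
CLOCK_GHZ = 0.5

pixel_gflops = 24 * 2 * 4 * 2 * CLOCK_GHZ   # 24 pixel pipes
vertex_gflops = 8 * 5 * 2 * CLOCK_GHZ        # 8 vertex units, 5-wide vector

print(pixel_gflops)                  # pixel shaders only
print(pixel_gflops + vertex_gflops)  # whole GPU, vertex ALUs included
```

Under those assumptions the pixel shaders alone come out to 192 GFLOPS, and adding the vertex ALUs pushes the whole-GPU figure to roughly 232, which would explain why the quoted numbers look like they conflict: one source is counting pixel shaders only, another the whole chip.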
--::{PC Gaming Master Race}::--