
Adding to your argument, this might come in handy.

This is a testimonial from an ex-Crytek developer:

" ... A quick look back at PS1 vs Saturn vs N64, Xbox vs Gamecube vs PS2, Xbox 360 vs PS3, and Xbox One vs PS4 shows countless examples of cross-platform games that should have run faster on one particular system when looking at the specs alone, but the benchmarks proved otherwise..."

"...We’re now in an era where major consoles are sharing the same underlying architecture, and some main system components comprise the same chips from the same manufacturer — some even the same model numbers — differing slightly by clock speeds, implementation, firmwares, and APIs. If we’ve not seen a slew of reliable and predictable FPS differences between previous machines so far, we’re certainly not going to suddenly see them moving forward. The gap is closing, not widening..."

"...

So, which computer would give the highest FPS in a game: one with a 2.0 teraflop processor in its GPU, or one with a 2.2 teraflop processor in its GPU? Now we need to look at different clock speeds, bus bandwidth, cache sizes, register counts, main and video RAM sizes, access latencies, core counts, northbridge implementation, firmwares, drivers, operating systems, graphics APIs, shader languages, compiler optimisations, overall engine architecture, data formats, and hundreds, maybe thousands of other factors. Each game team will utilise and tune the engine differently, constantly profiling and optimising based on conditions.

A 10% increase in any one of those individual stats isn’t going to give you a 10% increase in performance. The machine as a whole would need to be slightly increased to see an overall benefit, and you would only be able to determine the final gain by profiling each game, and even then it won’t scale linearly. If this was a consistent and predictable thing to calculate based on raw box stats, we wouldn’t need Digital Foundry..."