Garcian Smith said:
I'll take a shot at this one. The PS3's Cell processor features a single PowerPC-based main core (the PPE) clocked at 3.2 GHz, plus seven active SPUs, which are specialized vector coprocessors. Pretty powerful technology in 2006, but the Cell can't hold a candle to today's PC CPUs for gaming. In synthetic benchmarks, and without its SPUs, the Cell ranks at about the speed of a low-end (1.6 GHz) PowerPC G5, a CPU based on circa-2002 technology. That's the most useful metric for comparing the Cell to modern processors, because even though the SPUs theoretically make the Cell an eight-core CPU, they are (in layman's terms) highly restricted: among other things, they only run vectorized (SIMD) code, and instead of a cache each one has a small 256 KB local store that the programmer has to manage explicitly via DMA. Programs therefore have to be designed around the Cell's architecture from the ground up to take full advantage of it. So while the Cell has very high theoretical processing muscle, that muscle only really shows up in synthetic benchmarks; it has little practical application in games unless the SPUs are used in a painstakingly optimized way. This is why, despite the Cell's greater theoretical processing power than, say, a good Core 2 Duo, games optimized for the C2D can feature more processor-intensive tricks and effects than games optimized for the Cell. And it's why the Cell can't even touch a modern high-end LGA1366 CPU for most applications. (If it could, people would just use the much cheaper, four-year-old technology instead.)

And while we're at it, let's talk about the PS3's graphical architecture for a sec. The PS3 runs an NVIDIA GPU (the RSX) that's somewhere close to a 7800 GTX in performance, albeit with a slightly higher clock speed and crippled VRAM. The 7800 GTX was a high-end card in 2005, but today it's slow as hell. Here's a rough comparison between it and modern cards (synthetic benchmark results, but I can't find a more direct comparison).
As you can see, the 7800 GTX is (theoretically) beaten today by a $60 Radeon 4670. It's this weakness in graphical hardware that has led many PS3-exclusive developers to offload much of the GPU's work onto the Cell's SPUs instead, a workable, but imperfect, solution that still leaves most graphically intensive games capped at 30 FPS at 1280x720.

Finally, we get to God of War 3. The God of War series has always used certain cheap tricks to make the games look better than they really should: locking the camera so the engine never renders too much at once, making the environments as non-interactive as possible, and pre-scripting nearly every possible interaction via QTEs and canned animations. Compare this to the first Crysis, where environments are heavily interactive: grass and foliage sway as you walk through them, buildings crumble under artillery fire, hell, pretty much anything short of the ground itself can be destroyed or interacted with in some fashion. By comparison, God of War 3 doesn't even have a physics engine. So while GoW3 may look impressive (at only 1280x720 with morphological AA, whose benefits over multisampling AA are questionable), graphics aside it's basically a circa-2003 PS2 game with a shiny coat of paint.

The lack of anything else for the PS3's hardware to do meant the devs could free up pretty much everything for graphical processing. Furthermore, the near-total lack of interactivity, combined with the game's linearity, meant they could control exactly what was on screen at any given time, a luxury few other games share. That's why GoW3 is perhaps the best-looking game on the PS3: the devs made every sacrifice possible in other areas of the game in order to crank up the eye candy factor. I'm sure some of the other PC gurus can add to this explanation, too.
o) False: that benchmark was run at the end of 2006, under Linux on a PS3. The guest OS on the PS3 is virtualized, and at that time Linux wasn't optimized for the Cell; it ran in PowerPC-compatibility mode, so that benchmark means nothing.
o) The RSX uses a custom bus. The NVIDIA 7800 and 7900 cards were held back by low-speed VRAM and low bandwidth, while that might not be the case with the RSX.
o) original research?