alephnull said:
Depending on whether a workload lends itself to data-level parallelism, task-level parallelism, instruction-level parallelism, or is just inherently sequential, you will potentially get order-of-magnitude differences in performance between architectures with the same transistor count. It may be that the best possible performance (let's pretend this is well defined) of any conceivable game will be similar between the Xbox 360 and the PS3. However, I don't consider a metric with this much room for error to be very useful. Much of this debate seems to ignore the possibility that the rival platforms will each have significant advantages in different types of games.
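The point about parallelism can be sketched in a few lines of Python (a hypothetical illustration, not anything from either console's toolchain): the same arithmetic expressed sequentially versus in a data-parallel, SIMD-friendly form, where the latter can run an order of magnitude faster on hardware with wide vector units even though both do identical work.

```python
# Hypothetical sketch: one dot-product workload written two ways. Both
# compute the same result; only the second maps onto data-parallel
# hardware (SIMD lanes, GPU shader cores), which is why workload shape,
# not transistor count alone, decides observed performance.
import numpy as np

def dot_sequential(a, b):
    """Inherently sequential expression: one multiply-add per loop step,
    each depending on the running total from the previous step."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_data_parallel(a, b):
    """Data-parallel expression: independent element-wise multiplies that
    a vector unit can execute many-at-a-time, then a tree reduction."""
    return float(np.dot(a, b))

a = np.arange(1_000, dtype=np.float64)
b = np.ones(1_000, dtype=np.float64)
# Same answer either way; the difference is how well each form feeds
# parallel silicon.
assert abs(dot_sequential(a, b) - dot_data_parallel(a, b)) < 1e-9
```

Timing either version on a machine with vector units (or offloading the parallel form to a GPU) shows the gap the post describes; an inherently sequential chain of dependencies simply cannot use those extra execution units.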
Yes, there is more to it than I was letting on ... but when you're looking at processors which were designed to solve similar "problems," the metric I proposed becomes remarkably accurate.
Basically, if you take an nVidia and an ATI graphics card whose GPUs are manufactured on the same process, have similar energy requirements, and have a similar die size/transistor count, the two cards will perform in a very similar range.
Certainly, even the most powerful GPU in the world would be unable to run productivity software at an acceptable level, and most CPUs could not produce graphics at the level of a several-year-old mid-line GPU. But you would rarely be comparing those two processors against each other to decide which is "more powerful."