| Star Scream said: They either found the way to beat Moore's Law, or this is Ballacks... |
Can be both, actually.
Moore's "law" is simply an observation of a market trend: the number of transistors per chip roughly doubles every couple of years. GPUs have been "beating Moore's law" for years, because that's what the market wanted.
Anyway, nVidia hammering on the "right chip for the right task" theme and predicting such incredible progress in GPU processing power is just their way of putting the strategy they're forced into in a good light.
Intel is going towards a highly parallel cluster of x86 cores instead of dedicated silicon for their next-gen Larrabee chips, and ultimately towards GPU/CPU integration. ATI was acquired by AMD and is likely to follow this path too. nVidia is the odd man out... their expertise in CPUs is not at the same level, so they have to insist that parallelization of dedicated hardware is a better idea than multi-core CPUs doing the rendering in a completely programmable way.
All the TFLOPs they talk about might well be real in 5 years, since graphics is very easy to parallelize, and yet far less relevant than the flexibility of other solutions. Frankly I'm glad that the trend is to go back to software rendering, it makes so much more sense than the endless spiral of software/hardware features we've been through. In the world of Larrabee and similar GPUs, if MS wants to introduce a new feature in Direct3D 15 then no new dedicated hardware is needed, just new drivers that implement that feature on the general-purpose cores.