WereKitten said:
Soleron said:
570x in 6 years = 2.9x performance per year

Let's look at Nvidia's recent history.

G80, Nov 2006 = 1x performance (8800GTX)
G92, March 2008 = 1.6x performance (9800GX2)
GT200, June 2008 = 1.4x performance (GTX280)
GT200b, January 2009 = 2.4x performance (GTX295)

So, in order to make up the scaling deficit against that so far, GT300 expected in December would need 24.4x the performance of the 8800GTX. So it would have to be about 10x faster than the GTX295, and transistor scaling (55nm -> 40nm) will give a max of 2x of that. So Nvidia, transistor for transistor, has to improve performance by 5x with their next architecture.

I'm sorry, it's not happening. No computer architecture, CPU or GPU, in history has ever done that.

Look at the picture and at the numbers on it.

That fantasy math would mean that CPUs will increase in raw FLOPS performance by a factor of 1.2 each year => 1.2^6 gives about a 3x factor in 6 years.

For GPUs, the half-hidden line is something like 50*1.5^6 ≈ 570x, i.e. each core must become 1.5 times faster each year, and by 2015 he expects GPUs to have 50 cores.

The 1.5 factor is completely reasonable, and so is the 50-core number. Being able to squeeze all the TFLOPS out of a 50-core C/GPU seems optimistic, though...

What about power consumption and size? Certainly each core won't use 50 times less power in 6 years (especially if you keep increasing its performance by 1.5x each year), so if you put 50 cores there you've got a big, hot, power-sucking beast. Is that realistic?

I don't think so.
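
For anyone who wants to check the arithmetic in the two posts above, here is a minimal Python sketch. The 570x target, the 2.4x GTX295 figure, the ~2x from the 55nm -> 40nm shrink, and the 1.2x / 1.5x per-year factors are taken from the posts; the only added assumption is counting G80 (Nov 2006) to GT300 (Dec 2009) as three years.

Code:
# Quick sanity check of the scaling figures quoted above.

per_year = 570 ** (1 / 6)            # factor needed each year for 570x in 6 years
print(f"per-year factor: {per_year:.2f}x")               # ~2.9x

gt300_vs_g80 = per_year ** 3         # target after 3 years on that trend
print(f"GT300 vs 8800GTX: {gt300_vs_g80:.1f}x")          # ~24x

gt300_vs_gtx295 = gt300_vs_g80 / 2.4 # GTX295 is already at 2.4x the 8800GTX
print(f"GT300 vs GTX295: {gt300_vs_gtx295:.1f}x")        # ~10x

arch_gain = gt300_vs_gtx295 / 2      # 55nm -> 40nm shrink gives at most ~2x
print(f"needed per-transistor gain: {arch_gain:.1f}x")   # ~5x

# WereKitten's reading of the chart:
print(f"CPU line: 1.2^6 = {1.2 ** 6:.1f}x")              # ~3x
print(f"GPU line: 50 * 1.5^6 = {50 * 1.5 ** 6:.0f}x")    # ~570x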
