
GPUs Set to Increase Performance by 570 Times by 2015

Star Scream said:
They either found a way to beat Moore's Law, or this is bollocks...

Can be both, actually.

Moore's "law" is simply an observation of a market trend in the number of transistors per chip. GPUs have been "beating Moore's law" for years, because that's what the market wanted.

Anyway, nVidia hammering on the "right chip for the right task" angle and predicting such incredible progress in GPU processing power is just their way of putting the strategy they're forced into in a good light.

Intel is going towards a highly parallel cluster of x86 cores instead of dedicated silicon for their next-gen Larrabee chips, and ultimately towards GPU/CPU integration. ATI was acquired by AMD and is likely to follow this path too. nVidia is the odd man out... their expertise in CPUs is not at the same level, so they have to insist that parallelization of dedicated hardware is a better idea than multi-core CPUs doing the rendering in a completely programmable way.

All the TFLOPS they talk about might, in 5 years, be at the same time very real (graphics is very easy to parallelize) and yet hardly as relevant as the flexibility of other solutions. Frankly, I'm glad that the trend is to go back to software rendering; it makes so much more sense than the endless spiral of software/hardware features we've been through. In the world of Larrabee and similar GPUs, if MS wants to introduce a new feature in Direct3D 15, then no new dedicated hardware is needed, just new drivers that implement that feature on the general-purpose cores.



"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman


GPUs can still go a long way.

CPUs, however, are quickly approaching a brick wall.



570x in 6 years = 2.9x performance per year

Let's look at Nvidia's recent history.

G80, Nov 2006 = 1x performance (8800GTX)
G92, March 2008 = 1.6x performance (9800GX2)
GT200, June 2008 = 1.4x performance (GTX280)
GT200b, January 2009 = 2.4x performance (GTX295)

So, in order to make up the scaling deficit against that so far, the GT300 expected in December would need 24.4x the performance of the 8800GTX. That means it would have to be about 10x faster than the GTX295, and transistor scaling (55nm -> 40nm) will give at most 2x of that. So Nvidia, transistor for transistor, has to improve performance by 5x with their next architecture.

I'm sorry, it's not happening. No computer architecture, CPU or GPU, in history has ever done that.
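
If you want to check that arithmetic yourself, here it is as a few lines of Python (just a sketch of the numbers above; the 2.4x GTX295 figure and the 2x-per-shrink cap are my rough estimates, not official specs):

target = 570                           # claimed total speedup by 2015
annual = target ** (1.0 / 6)           # ~2.9x needed every year for 6 years
needed_by_late_2009 = annual ** 3      # ~3 years after the G80 (Nov 2006): ~24x
vs_gtx295 = needed_by_late_2009 / 2.4  # GTX295 is ~2.4x an 8800GTX: ~10x
per_transistor = vs_gtx295 / 2         # 55nm -> 40nm buys at most ~2x: ~5x left
print(annual, needed_by_late_2009, vs_gtx295, per_transistor)
# roughly 2.9, 24, 10 and 5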



It's impossible to increase performance by 570 times by 2015, but in Hirais's defence, he meant it would be accomplished by a combination of CPU and GPU. Still, it shouldn't be more than roughly 50 times the performance of today, so he exaggerated by a factor of 10.
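
Quick check of that factor, using the ~50x combined figure above (which is just my rough guess):

print(570 / 50.0)   # ~11.4, i.e. off by roughly an order of magnitude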



Soleron said:
570x in 6 years = 2.9x performance per year

Let's look at Nvidia's recent history.

G80, Nov 2006 = 1x performance (8800GTX)
G92, March 2008 = 1.6x performance (9800GX2)
GT200, June 2008 = 1.4x performance (GTX280)
GT200b, January 2009 = 2.4x performance (GTX295)

So, in order to make up the scaling deficit against that so far, the GT300 expected in December would need 24.4x the performance of the 8800GTX. That means it would have to be about 10x faster than the GTX295, and transistor scaling (55nm -> 40nm) will give at most 2x of that. So Nvidia, transistor for transistor, has to improve performance by 5x with their next architecture.

I'm sorry, it's not happening. No computer architecture, CPU or GPU, in history has ever done that.

Look at the picture and at the numbers on it.

That fantasy math would mean that CPUs increase in raw FLOPS performance by a factor of 1.2 each year => 1.2^6 gives about a 3x factor in 6 years.

For GPUs the half-hidden line is something like 50*1.5^6 ≈ 570x, i.e. each core must become 1.5 times faster each year, and by 2015 he expects GPUs to use 50 cores.

The 1.5 factor is completely reasonable. The 50-core number is as well. Being able to squeeze all the TFLOPS out of a 50-core C/GPU seems optimistic, though...
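
Spelling the same numbers out as a tiny Python sketch (this is my reading of the half-hidden line on the slide, not anything Nvidia actually published):

cpu_total = 1.2 ** 6         # ~3x raw FLOPS for CPUs over 6 years
per_core = 1.5 ** 6          # ~11.4x per GPU core over the same span
gpu_total = 50 * per_core    # 50 cores gives ~570x overall
print(cpu_total, per_core, gpu_total)   # ~2.99, ~11.39, ~569.5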



"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman


Final Fantasy: Advent Children, the game, running in real time confirmed? The only problem with this is that CGI-quality games would cost like $500 million+ to make.



global warming to arrive 570x earlier than predicted!



Doubt is not a pleasant condition, but certainty is absurd.

owner of: Atari 2600, Commodore 64, NES, Game Boy, Atari Lynx, Genesis, Saturn, Neo Geo, DC, PS2, GC, X360, Wii

5 THINGS I'd like to see before i knock out:

a. an AAA 3D Sonic title

b. a Nintendo-developed game that has an "M" rating

c. a redesigned PS controller

d. SEGA back in the console business

e. M$ out of the OS business

Maybe they are implying that GPUs in 2015 will be 570 times faster in peak performance than CPUs today.



WereKitten said:
Soleron said:
570x in 6 years = 2.9x performance per year

Let's look at Nvidia's recent history.

G80, Nov 2006 = 1x performance (8800GTX)
G92, March 2008 = 1.6x performance (9800GX2)
GT200, June 2008 = 1.4x performance (GTX280)
GT200b, January 2009 = 2.4x performance (GTX295)

So, in order to make up the scaling deficit against that so far, the GT300 expected in December would need 24.4x the performance of the 8800GTX. That means it would have to be about 10x faster than the GTX295, and transistor scaling (55nm -> 40nm) will give at most 2x of that. So Nvidia, transistor for transistor, has to improve performance by 5x with their next architecture.

I'm sorry, it's not happening. No computer architecture, CPU or GPU, in history has ever done that.

Look at the picture and at the numbers on it.

That fantasy math would mean that CPUs increase in raw FLOPS performance by a factor of 1.2 each year => 1.2^6 gives about a 3x factor in 6 years.

For GPUs the half-hidden line is something like 50*1.5^6 ≈ 570x, i.e. each core must become 1.5 times faster each year, and by 2015 he expects GPUs to use 50 cores.

The 1.5 factor is completely reasonable. The 50-core number is as well. Being able to squeeze all the TFLOPS out of a 50-core C/GPU seems optimistic, though...

What about power consumption and size? Certainly each core won't use 50 times less power in 6 years (especially if you keep increasing its performance by 1.5x each year); if you put 50 cores there you've got a big, hot, power-sucking beast. Is that realistic?

I don't think so.
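
To put a very rough number on it (the 200 W baseline and the 1.6x perf-per-watt gain per shrink below are assumptions I'm making up for illustration, not measured figures):

baseline_watts = 200.0           # assumed 2009 high-end GPU board power
perf_gain = 570.0                # the claimed speedup by 2015
perf_per_watt_gain = 1.6 ** 3    # assumed ~1.6x per shrink, 3 shrinks: ~4.1x
print(baseline_watts * perf_gain / perf_per_watt_gain)   # ~27,800 W

Unless perf per watt improves far more than process scaling alone suggests, that is a small power plant, not a graphics card.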

 



My Mario Kart Wii friend code: 2707-1866-0957

NJ5 said:
...

What about power consumption and size? Certainly each core won't use 50 times less power in 6 years (especially if you keep increasing its performance by 1.5x each year); if you put 50 cores there you've got a big, hot, power-sucking beast. Is that realistic?

I don't think so.

 

It's not reasonable that each shader will increase in performance by 1.5x per year. The most I've ever seen for a GPU is 10-15%. Most of the performance increase is due to the die shrink. What do you mean by "cores"? GPUs don't have cores, and shaders are only a small part (30-40%) of the die area. If you mean X2 chips, well, the scaling is poor: less than 2.5x for 4 chips. What do you think 50 would be?

NJ5, you are correct. Most power reduction comes from die shrinks, and you're lucky if you get much more than a 30% reduction from a shrink. The 65nm 9800GTX used 20% less power than the 8800 Ultra while being similar in design. Six years means three die shrinks, so using 1/5 of the power is optimistic; 1/50 is ridiculous, especially if you also want to increase performance.
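
For reference, the shrink arithmetic spelled out (assuming the ~30% per-shrink reduction mentioned above):

per_shrink = 0.7           # ~30% power reduction per die shrink
after_three = per_shrink ** 3
print(after_three)         # ~0.34, i.e. about 1/3 of today's power, far from 1/5 or 1/50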