
GPUs Set to Increase Performance by 570 Times by 2015

Soleron said:
NJ5 said:
...

What about power consumption and size? Certainly each core won't use 50 times less power in 6 years (especially if you keep increasing its performance by 1.5x each year); if you put 50 cores there, you've got a big, hot, power-sucking beast. Is that realistic?

I don't think so.

 

It's not reasonable that each shader will increase in performance by 1.5x. The most I've ever seen for a GPU is 10-15%. Most of the performance increase is due to the die shrink. What do you mean by "cores"? GPUs don't have cores, and shaders are only a small part (30-40%) of the die area. If you mean X2 chips, well, the scaling is poor and is less than 2.5x for 4 chips; what do you think 50 would be?

NJ5, you are correct. Most power reduction is from die shrinks, and you're lucky if you get much more than 30% reduction from a shrink. The 65nm 9800GTX used 20% less power than the 8800Ultra while being similar in design. 6 years means 3 die shrinks, so using 1/5 of the power is optimistic. 1/50 is ridiculous, especially if you also want to increase performance.
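A quick back-of-the-envelope sketch of that shrink arithmetic (just a sketch, assuming roughly 30% power reduction per node and three nodes in six years, as stated above):

# Rough sanity check of the die-shrink power argument.
# Assumption (from the post above): ~30% power reduction per die shrink,
# and roughly three shrinks in six years.
reduction_per_shrink = 0.70   # power drops to 70% of the previous node
shrinks = 3

remaining_power = reduction_per_shrink ** shrinks
print(f"Power after {shrinks} shrinks: {remaining_power:.2f}x of today's")   # ~0.34x
print(f"That is roughly 1/{1 / remaining_power:.1f} of today's power")       # ~1/2.9
# Reaching 1/50 would need roughly a 73% cut per shrink (0.27**3 ~= 0.02),
# far beyond the ~20-30% reductions mentioned above.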

 

 

 

The usage of the word cores is a bit weird, so I assumed he meant multiplying the number of shader units by 50.

 



My Mario Kart Wii friend code: 2707-1866-0957


I'm a little confused: how far can GPUs go? I mean... isn't there a limit to what they can produce? Like, once we hit photo-realism, what else is left? Plus, can anyone afford to make a game that would use it to its potential?



Conner52 said:
I'm a little confused: how far can GPUs go? I mean... isn't there a limit to what they can produce? Like, once we hit photo-realism, what else is left? Plus, can anyone afford to make a game that would use it to its potential?

Creating more complex photo-realism? An apple on a table could be made to look very photo-realistic nowadays, but with advances in GPUs we can begin to see more complex things like bustling cities with all of the millions of different light sources, people, vehicles, etc. Greater view distances and resolutions, also.

Also, GPUs have started calculating more things, like physics, and so GPUs can take more of this workload off of the CPU.



Conner52 said:
I'm a little confused: how far can GPUs go? I mean... isn't there a limit to what they can produce? Like, once we hit photo-realism, what else is left? Plus, can anyone afford to make a game that would use it to its potential?

Let's say making a photorealistic render takes about 1 day. (The rendering time is based on one artist's comment; if you have a better example, let me know, because I am interested in this too.)

http://www.trinity3d.com/media/next_limit/maxwell_render/maxwell_render_example9.jpg

(It's called spectral rendering, btw.)

For a real-time 60 fps spectral scene it would take...

1 * 24 * 60 * 60 * 60 = 5,184,000

...times more processing power than we have now. So you might not see it, and neither will your children... unless, of course, there's some breakthrough. :)
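To make the units in that arithmetic explicit, here is the same estimate as a tiny sketch (assuming the 1-day-per-frame figure above and a 60 fps target):

# How much faster hardware would need to be to turn a 1-day offline
# render into a real-time frame at 60 fps (figures from the post above).
seconds_per_offline_frame = 1 * 24 * 60 * 60    # 1 day = 86,400 s
target_fps = 60
seconds_available_per_frame = 1 / target_fps    # ~0.0167 s

speedup_needed = seconds_per_offline_frame / seconds_available_per_frame
print(f"Required speedup: {speedup_needed:,.0f}x")   # 5,184,000x

(For scale: even the 570x figure in the thread title would still leave you roughly 9,000x short of that.)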

 

SamuelRSmith said:
Conner52 said:
I'm a little confused: how far can GPUs go? I mean... isn't there a limit to what they can produce? Like, once we hit photo-realism, what else is left? Plus, can anyone afford to make a game that would use it to its potential?

Creating more complex photo-realism? An apple on a table could be made to look very photo-realistic nowadays, but with advances in GPUs we can begin to see more complex things like bustling cities with all of the millions of different light sources, people, vehicles, etc. Greater view distances and resolutions, also.

Also, GPUs have started calculating more things, like physics, and so GPUs can take more of this workload off of the CPU.

Or a human head? However, raster rendering != photorealism... not even close. It can be impressive, but still somewhat awkward.



I think the question comes down to what he is really using for comparisons ...

If you looked at the typical GPU that a person owns today, it would (probably) be in the performance range of a GeForce 6800 or Radeon X800, which is (essentially) a 5-year-old GPU. If you then consider what the performance of a state-of-the-art GPU will be like in 2015, you would probably expect a performance boost in the 100+ times range.

On top of that, there is the question of what is being used as a benchmark ...

Very few GPUs will see much of an amazing improvement in their ability to render untextured polygons without lighting because there is really very little point to rendering (dramatically) more polygons than you have pixels. In contrast, over the next 5 or so years you will probably see a massive increase in the number of ray-triangle intersections that can be done by GPUs as companies build in support for those calculations to make way for real-time raytracing.
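For anyone wondering what a "ray-triangle intersection" actually involves, here is a minimal illustrative sketch of one common formulation (the Möller-Trumbore test). It is purely a CPU-side toy; a GPU doing real-time raytracing would run millions of these per frame in parallel:

# Minimal Möller-Trumbore ray/triangle intersection test (illustrative only).
# Returns the distance t along the ray if it hits the triangle, else None.
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None       # hit distance along the ray

# Example: a ray down the z-axis into a triangle lying in the z=5 plane
print(ray_triangle_intersect((0, 0, 0), (0, 0, 1),
                             (-1, -1, 5), (1, -1, 5), (0, 1, 5)))   # 5.0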



NJ5 said:
Soleron said:
NJ5 said:
...

What about power consumption and size? Certainly each core won't use 50 times less power in 6 years (especially if you keep increasing its performance by 1.5x each year); if you put 50 cores there, you've got a big, hot, power-sucking beast. Is that realistic?

I don't think so.

 

It's not reasonable that each shader will increase in performance by 1.5x. The most I've ever seen for a GPU is 10-15%. Most of the performance increase is due to the die shrink. What do you mean by "cores"? GPUs don't have cores, and shaders are only a small part (30-40%) of the die area. If you mean X2 chips, well, the scaling is poor and is less than 2.5x for 4 chips; what do you think 50 would be?

NJ5, you are correct. Most power reduction is from die shrinks, and you're lucky if you get much more than 30% reduction from a shrink. The 65nm 9800GTX used 20% less power than the 8800Ultra while being similar in design. 6 years means 3 die shrinks, so using 1/5 of the power is optimistic. 1/50 is ridiculous, especially if you also want to increase performance.

 

The usage of the word cores is a bit weird, so I assumed he meant multiplying the number of shader units by 50.

 

Do you expect the general architectures of GPUs as we know them today to scale much longer? That's not the direction in which we are going, as unified shader units are becoming, de facto, more and more similar to CPU cores.

I used the term cores because that's what you will have: x86 cores in a Larrabee, SPUs in a Cell, and so on. Again, you must see this from the perspective of a future in which a GPU is basically an array of general-purpose stream processors, with very little specialized silicon compared to the programmable cores.

And yes, power is the main trouble with this direction, as 32-core Larrabee prototypes (65nm) are said to require about 300W. Flexibility is obviously not necessarily reconcilable with efficiency. And yet, I'm writing this on a quad-core 3-something GHz desktop computer, not on a 10 GHz one.
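Taking that figure at face value, a rough sketch of the per-core power and of what a hypothetical 50-core part might draw (assuming ~300W for 32 cores at 65nm, as quoted, and a generous ~30% power cut per shrink):

# Rough per-core power estimate from the quoted Larrabee figure
# (assumption from above: ~300 W for a 32-core 65nm prototype).
total_watts, cores = 300.0, 32
watts_per_core = total_watts / cores
print(f"~{watts_per_core:.1f} W per core at 65nm")            # ~9.4 W

# What a hypothetical 50-core part might draw after a few shrinks,
# assuming a (generous) 30% power reduction per node.
for shrinks in (0, 1, 2, 3):
    scaled = watts_per_core * (0.70 ** shrinks) * 50
    print(f"50 cores after {shrinks} shrink(s): ~{scaled:.0f} W")   # 469, 328, 230, 161 W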

PS: I'm not saying his numbers are accurate, just trying to read them in the light of foreseeable hardware trends so that they make at least some sense. Since I don't have access to his actual words I can't really know what _he_ meant.

 



"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman

HappySqurriel said:

I think the question comes down to what he is really using for comparisons ...

If you looked at the typical GPU that a person owns today, it would (probably) be in the performance range of a GeForce 6800 or Radeon X800, which is (essentially) a 5-year-old GPU. If you then consider what the performance of a state-of-the-art GPU will be like in 2015, you would probably expect a performance boost in the 100+ times range.

On top of that, there is the question of what is being used as a benchmark ...

Very few GPUs will see much of an amazing improvement in their ability to render untextured polygons without lighting because there is really very little point to rendering (dramatically) more polygons than you have pixels. In contrast, over the next 5 or so years you will probably see a massive increase in the number of ray-triangle intersections that can be done by GPUs as companies build in support for those calculations to make way for real-time raytracing.

 


According to Mr. Huang, by 2015 graphics processing units will have computing power that is 570 times higher than that of today's GPUs. Meanwhile, central processing units (CPUs) will be only three times faster than today's most powerful chips. Considering the fact that modern graphics chips offer about 1 TFLOPS of computing power, in 2015 they will offer a whopping 570 TFLOPS.

 



Second analysis in light of above quote:

Nov. 2006 - 8800GTX: 1x
Mar. 2008 - 9800GX2: 2.22x
Jun 2008 - GTX280: 1.8x
Jan 2009 - GTX295: 3.45x

In keeping with the prediction, GT300 in Dec. 2009 should be 24.4x, so, discounting die shrink gains, GT300 needs to be 3.5x faster than GT200b per shader, assuming shaders scale linearly. Which, given that no one is expecting more than a 20-30% gain, is still impossible.
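For reference, a quick sketch of the yearly multiplier implied by the 570x-in-six-years claim, and where a late-2009 part would have to land on that curve (8800GTX = 1x baseline, as in the list above):

# Implied yearly growth from the "570x by 2015" claim, and where a
# late-2009 GT300 would have to land on the same curve (8800GTX = 1x).
claimed_total, years = 570, 6
yearly = claimed_total ** (1 / years)
print(f"Implied growth: ~{yearly:.2f}x per year")       # ~2.88x

# Three years of that growth from the Nov. 2006 baseline:
print(f"Late-2009 target: ~{yearly ** 3:.1f}x")         # ~23.9x, i.e. the ~24x above
# Compare with the fastest actual card listed above (GTX295 at 3.45x):
print(f"Gap vs GTX295: ~{yearly ** 3 / 3.45:.1f}x")     # ~6.9x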



HappySqurriel said:

Very few GPUs will see much of an amazing improvement in their ability to render untextured polygons without lighting because there is really very little point to rendering (dramatically) more polygons than you have pixels. In contrast, over the next 5 or so years you will probably see a massive increase in the number of ray-triangle intersections that can be done by GPUs as companies build in support for those calculations to make way for real-time raytracing.


But you can't compare a 3D render with a 2D image like that. Already in this gen you have single in-game models with a poly count that exceeds the number of pixels on screen: Forza 3's cars have 1 million polys that need to be rendered, while the screen res (the number of visible pixels) isn't more than 920k (1280x720).

Really, I can imagine cities with poly counts in the billions, which would demand insane GPU rendering power (unless you use tricks where you don't calculate every poly that isn't in view, but those technologies only go so far... I think it's even still in a primitive state, if I understood the Brink/Bethesda developer's comments about smart rendering techniques correctly).
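Those "don't draw what you can't see" tricks are mostly frustum and occlusion culling. Here is a minimal illustrative sketch of the frustum half of it (the plane and sphere values are hypothetical, just to show the idea):

# Minimal view-frustum culling sketch: an object is skipped if its bounding
# sphere lies entirely on the outside of any frustum plane.
# Each plane is (normal, d) with the normal pointing into the visible volume,
# so points with dot(normal, p) + d >= 0 are on the visible side.
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sphere_in_frustum(center, radius, planes):
    for normal, d in planes:
        if dot(normal, center) + d < -radius:
            return False          # completely outside this plane: cull it
    return True                   # potentially visible: send it to the GPU

# Toy frustum: just a near plane at z=1 and a far plane at z=100 (hypothetical).
planes = [((0.0, 0.0, 1.0), -1.0),    # near: visible if z >= 1
          ((0.0, 0.0, -1.0), 100.0)]  # far:  visible if z <= 100

print(sphere_in_frustum((0, 0, 50), 2.0, planes))   # True  (inside)
print(sphere_in_frustum((0, 0, -5), 2.0, planes))   # False (behind the camera)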



Without tools to decrease dev cost, this won't even matter.