NJ5 said:
What about power consumption and size? Certainly each core won't use 50 times less power in 6 years (especially if you keep increasing its performance by 1.5x each year). If you put 50 cores there, you've got a big, hot, power-sucking beast. Is that realistic? I don't think so.
It's not reasonable that each shader will increase in performance by 1.5x per year. The most I've ever seen for a GPU is 10-15%; most of the performance increase comes from the die shrink. And what do you mean by "cores"? GPUs don't have cores, and shaders are only a small part (30-40%) of the die area. If you mean X2 chips, well, the multi-chip scaling is poor: less than 2.5x for 4 chips, so what do you think 50 would look like?
NJ5, you are correct. Most power reduction comes from die shrinks, and you're lucky to get much more than a 30% reduction per shrink. The 65nm 9800GTX used 20% less power than the 8800 Ultra while being similar in design. 6 years means about 3 die shrinks; at ~30% each, that compounds to roughly 0.7^3 ≈ 1/3 of the power, so even 1/5 is optimistic. 1/50 is ridiculous, especially if you also want to increase performance.
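The compounding argument above can be sketched in a few lines. This is just the arithmetic from the post (an assumed ~30% power reduction per die shrink over 3 shrinks); the figures are illustrative, not measured:

```python
# Illustrative sketch of the compound power-scaling argument.
# Assumption (from the post): each die shrink cuts power by roughly 30%.

def power_after_shrinks(shrinks: int, reduction_per_shrink: float = 0.30) -> float:
    """Fraction of the original power remaining after N die shrinks."""
    return (1.0 - reduction_per_shrink) ** shrinks

# 6 years is roughly 3 die shrinks:
frac = power_after_shrinks(3)
print(f"Power after 3 shrinks: {frac:.3f} of original")  # 0.7^3 = 0.343, about 1/3
```

Even under this optimistic per-shrink assumption, you only reach about a third of the original power, which is why 1/5 is a stretch and 1/50 is out of the question.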