Soleron said:
It's not reasonable to expect each shader to get 1.5x faster. The most I've ever seen for a GPU is 10-15% per shader; most of the performance increase comes from the die shrink. And what do you mean by "cores"? GPUs don't have cores in that sense, and shaders are only a small part (30-40%) of the die area. If you mean X2-style multi-chip cards, the scaling is poor: less than 2.5x for 4 chips, so what do you think 50 would look like?

NJ5, you are correct. Most power reduction comes from die shrinks, and you're lucky to get much more than a 30% reduction per shrink. The 65nm 9800GTX used 20% less power than the 8800 Ultra while being similar in design. Six years means roughly 3 die shrinks, so even 1/5 of the power is optimistic. 1/50 is ridiculous, especially if you also want to increase performance.
The use of the word "cores" is a bit odd, so I assumed he meant multiplying the number of shader units by 50.
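For what it's worth, the power-scaling arithmetic in the quoted post checks out. A quick sketch, assuming the quoted figure of roughly a 30% power reduction per full die shrink, compounded over 3 shrinks:

```python
# Assumption from the quoted post: ~30% power reduction per die
# shrink, and ~3 shrinks in 6 years. Reductions compound
# multiplicatively, not additively.
reduction_per_shrink = 0.30
shrinks = 3

remaining = (1 - reduction_per_shrink) ** shrinks
print(f"Power after {shrinks} shrinks: {remaining:.2f}x of original")
# 0.7 ** 3 = 0.343, i.e. about 1/3 of the original power draw
```

So shrinks alone get you to roughly 1/3 of the original power, which is why 1/5 is called optimistic and 1/50 ridiculous, before even asking for more performance.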
My Mario Kart Wii friend code: 2707-1866-0957