ethomaz said:
Hummmm...

The GeForce Titan has 2880 shader processors instead of the 1536 found in the GTX 680... that's nearly 90% more CUDA cores.

The full GK110 chip only has 15 SMX clusters for a total of 2880 SPs, so you cannot have 3072 SPs. The K20X has one cluster disabled for a total of 2688 SPs, clocked at 732MHz with 6GB of 5.2GHz GDDR5. With those specs it's already at a 235W TDP.

http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last

2688 SPs @ 732MHz vs the GTX 680's 1536 SPs @ 1058MHz only gives us 21% more shader processing power. To get 91% more shader power, you'd need a K20X chip with 2688 SPs @ 1155MHz (!) or the full GK110 chip with 2880 SPs clocked at 1080MHz.
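The shader math is easy to sanity-check: relative throughput is just SP count times core clock (assuming the same per-SP work per clock, which holds roughly within Kepler). A quick sketch using the clocks quoted above:

```python
# Relative shader throughput ~ SP count x core clock (same architecture assumed).
gtx680 = 1536 * 1058   # GTX 680: 1536 SPs @ 1058 MHz
k20x   = 2688 * 732    # Tesla K20X: 2688 SPs @ 732 MHz

print(f"K20X over GTX 680: +{(k20x / gtx680 - 1) * 100:.0f}%")   # ~21%

# Clock required to hit +91% over the GTX 680:
target = gtx680 * 1.91
print(f"2688 SPs would need {target / 2688:.0f} MHz")   # ~1155 MHz
print(f"2880 SPs would need {target / 2880:.0f} MHz")   # ~1078 MHz
```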

Moving on to memory bandwidth: 5.2GHz GDDR5 over a 384-bit bus only gives us 249.6 GB/s. To get a 91% increase over the GTX 680's 192 GB/s, you'd need 366.7 GB/s, i.e. GDDR5 at 7639MHz (impossible!!).
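Same deal for the bandwidth figures: GDDR5 bandwidth is effective data rate times bus width in bytes. A small sketch (data rates as quoted in the thread):

```python
# GDDR5 bandwidth = effective data rate (MHz, i.e. MT/s) x bus width (bytes).
def bandwidth_gbs(data_rate_mhz, bus_bits):
    return data_rate_mhz * (bus_bits / 8) / 1000  # GB/s

print(bandwidth_gbs(5200, 384))   # 249.6 GB/s -> K20X
print(bandwidth_gbs(6008, 256))   # ~192 GB/s  -> GTX 680

# Data rate needed on a 384-bit bus for +91% over the GTX 680's 192 GB/s:
target = 192 * 1.91                    # ~366.7 GB/s
print(target / (384 / 8) * 1000)       # ~7640 MHz effective - not happening
```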

On my HD7970, overclocking the memory to 7000MHz alone increases power consumption by 25W.

How do you expect to have a GK110 chip with 2688 SPs @ 1155MHz or 2880 SPs @ 1080MHz plus 7000MHz GDDR5 (at minimum) and not use 275-300W of power, when such a chip with just 2688 SPs @ 732MHz and a measly 5.2GHz of GDDR5 is already pushing a 235W TDP?
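For a rough sense of scale, even under the optimistic assumption that TDP scales only linearly with core clock (real dynamic power scales roughly with f*V^2, so the true increase would be worse, and this crudely scales the whole board TDP, not just the GPU's share):

```python
# Crude lower-bound estimate: scale the K20X's 235W TDP linearly with core clock.
# (Dynamic power actually scales ~f*V^2, so this understates the increase.)
base_tdp, base_clk = 235, 732          # K20X as launched
for clk in (1080, 1155):               # clocks the +91% scenario would require
    print(f"{clk} MHz -> at least {base_tdp * clk / base_clk:.0f} W")
```

That already lands in the 347-371W range before you even touch the memory clocks, which is the point: the "+91%" reading of the rumor doesn't fit in any sane power budget.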

The math doesn't add up. 

Soleron said:
>.>

 

I was spreading NV's marketing for free. They should be grateful!! Maybe if I said "CUDA cores" enough times, I'd be offered a spot in their viral Focus Group and be sent a free GTX 690. I kid, I kid. ;)