
Forums - PC - GeForce Titan GPU with GK110 Core Thread

Has anyone noticed that this isn't any stronger than it should be?

2012's strongest GPU was the HD 7970 GHz (Don't argue, it currently is).  2011's was the GTX 580.   

http://www.tomshardware.com/charts/2012-vga-gpgpu/19-Tom-s-Hardware-Index-C-Extreme,2977.html

^ You can see that the current leader is 33-44% stronger than the previous one.  The Titan is currently 60-65% the strength of a GTX 690, so another ~30% increase just makes sense.  This isn't special; it should just be called a GTX 780.
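The scaling claim above is easy to sanity-check. This is just a sketch of the post's own rough figures (from the Tom's Hardware chart), not measured data:

```python
# Back-of-the-envelope check of the generational-scaling argument.
# All inputs are the post's own estimates, not benchmarks.
gtx580 = 1.00                                       # 2011's leader as the baseline
hd7970ghz_range = (1.33 * gtx580, 1.44 * gtx580)    # 2012 leader, 33-44% faster

# "Another 30% increase just makes sense" for a hypothetical GTX 780:
for base in hd7970ghz_range:
    print(f"hypothetical GTX 780: {1.30 * base:.2f}x a GTX 580")
# roughly 1.73x to 1.87x a GTX 580
```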



CGI-Quality said:
As I said, I'll come back in a month with my set-up + Cry3 readings and we can evaluate the results. In fact, I'll happily eat my words if I'm wrong. :)

I'll happily eat my words if I am wrong as well, because right now an HD7970 takes a 30 fps performance hit, from 57 fps to 27 fps, with 4xMSAA, which is an outrageous penalty. These deferred-rendering game engines are ruining any chance of having traditional anti-aliasing in place without needing 2-3 GPUs, for crying out loud ;)
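For what it's worth, that penalty expressed as a percentage (using the beta readings quoted in this post):

```python
# HD 7970 frame rates in the Crysis 3 beta, as quoted in the post above.
no_aa_fps = 57
msaa4x_fps = 27

penalty = (no_aa_fps - msaa4x_fps) / no_aa_fps
print(f"4xMSAA costs {penalty:.0%} of the frame rate")  # about 53%
```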

BTW, under your username there is a P with 168,000 points. What is that, and how do you guys accumulate them? 



Captain_Tom said:

Has anyone noticed that this isn't any stronger than it should be?

^ You can see that the current leader is 33-44% stronger than the previous one.  It is currently 60-65% the strength of a 690.  Another 30% increase just makes sense.  This isn't special, it should just be called a GTX 780.

I think the problem is we are still stuck on the 28nm node, and the GTX 680 was already pushing 180-190W power consumption at load. Without going back to GTX 480-era power consumption of 270W, I think 85% of the performance of the 690 is actually a very good increase on the same node. My problem is the price. The GTX 580 was $499, but since at least June 2012 something like a Gigabyte Windforce 1GHz HD7970 or the HIS dual-fan model could be purchased for $380 on Newegg. With overclocking those cards can hit 1180MHz. At those speeds an HD7970 OC is nearly 50% faster than a GTX 580 at 1080P and 59% faster at 1600P (http://www.techpowerup.com/reviews/HIS/HD_7970_X_Turbo/28.html).

Let's assume for a second that an overclocked Titan can even come close to 50-59% faster than the HD7970GE -- well, NV wants to ask $899 for that, but we just paid $380 for a similar increase from an HD7970 OC over the 580! I also fear they'll lock voltage control like they did with the GTX 600 series. People complained when the HD7970 on launch drivers was just 20% faster for only $100 more than the 580. Now NV wants to raise the price of a single flagship GPU from $499-649 to $899. They are out of their minds!
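Putting that pricing complaint into numbers (a sketch only; the ~50% gains are this post's own estimates, not benchmarks):

```python
# Dollars paid per percentage point of performance gained over the
# previous flagship. Both "50" figures are the post's rough estimates.
hd7970_oc_price, hd7970_oc_gain = 380, 50   # ~50% over a GTX 580, overclocked
titan_price, titan_gain = 899, 50           # the optimistic assumption above

print(f"HD 7970 OC: ${hd7970_oc_price / hd7970_oc_gain:.2f} per point")  # $7.60
print(f"Titan:      ${titan_price / titan_gain:.2f} per point")          # $17.98
```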

Maybe Jen-Hsun Huang has been spending too much time with Tim Cook, thinking it's 'normal' that faster technology should cost more every generation, not less.  (http://www.techspot.com/news/51457-apple-adds-128gb-option-to-current-ipad-lineup-updated.html)

Perhaps NV is adopting Apple's early-adopter price-premium approach. 

Soleron said:

It's incredible that five years after a $90 HD4850 we're only just now seeing equivalent value.

 

The low-end desktop dGPU market is basically dead. I've dodged GPU upgrade costs for 3 generations in a row thanks to AMD-powered bitcoin mining, but as soon as that ends (which is inevitable), I'll have to face paying $500-800 for these new NV/AMD flagship GPUs. My 7970s have earned $670 so far, which should make the upgrade to 8970s cheap. Once that luck runs out, it's going to be sticker shock for sure. With the awesome price/performance of the HD4000-6000 series behind us, the era of $499-649 flagship GPUs seems to be back in full force (or rather, NV thinks we should happily accept $899 in this global economy). I guess I missed the memo where PC enthusiasts got 6-figure job offers in droves in Silicon Valley :)



CGI-Quality said:
zarx said:
Crysis 3 is designed to be untouchable technically for three years (until their next marquee title), according to Crytek, so it's not really surprising that max settings will hobble the top cards of today. People complained that Crysis 2 didn't push the boundaries of what was possible like the first Crysis did. So they set out with C3 to "melt PCs", in their words, and it's no surprise that they have gone back to pushing the max settings to 11 and ignoring the quality/performance trade-offs that were made with C2.

Looks like their alpha filtering could do with some work though. Also, true MSAA on a deferred-rendering engine is kind of crazy. And there is still some optimisation to come on the game and driver side before release.

Do you expect an update to correct the SLI issues from the beta?


Probably -- it's an Nvidia-sponsored title, isn't it?




BlueFalcon said:
Captain_Tom said:

Has anyone noticed that this isn't any stronger than it should be? [...]

I think the problem is we are still stuck on 28nm node and GTX680 was already pushing 180-190W power consumption at load. [...] NV wants to raise the price of a single flagship GPU from $499-649 to $899. They are out of their minds!


I don't think the 7970 was overpriced when you look at how overpriced the 580 was.  Also, you do realize it will be a repeat of what the 4870 did to the 260/280, right?  The "Titan" will come out, and 2 weeks later the 8970 will launch and offer close to or equal performance for $300 less.  LOL, what a joke. 



Captain_Tom said:

Has anyone noticed that this isn't any stronger than it should be? [...]

^ You can see that the current leader is 33-44% stronger than the previous one. [...] This isn't special, it should just be called a GTX 780.

It's not a new generation. They could have made this last year if they hadn't screwed up the chip yields; it's just a giant Kepler.

In fact I maintain the yields are still screwed and they'll make 3 of these and <100 of the Quadros (which still don't have all shaders enabled).



Soleron said:
Captain_Tom said:

Has anyone noticed that this isn't any stronger than it should be? [...]

It's not a new generation. They could have made this last year if they hadn't screwed up the chip yields; it's just a giant Kepler.

In fact I maintain the yields are still screwed and they'll make 3 of these and <100 of the Quadros (which still don't have all shaders enabled).

Well, that's what I am saying.  It isn't a 780, but it should be.  However, you make a good point.  Perhaps they still can't get their yields right, and that is why this is not a 780 (because they aren't ready).  But at the same time the 7970 GHz is making them look stupid, so they want the performance crown back.  

What will be interesting is how the 8970 performs.  It looks to be 25-50% stronger than the 7970.  That would put it very close to the "Titan" for $300 less...



Captain_Tom said:
What will be interesting is how the 8970 performs.  It looks to be 25-50% stronger than the 7970.  That would put it very close to the "Titan" for $300 less...

How can the HD8970 be 25-50% faster than the HD7970 GE on 28nm? AMD has never made a 550mm2 die, and power consumption for the HD7970GE was already near 230-240W. Even with a more mature 28nm node, even if they shaved the power consumption of a 1050MHz Tahiti XT part down to just 200W, I can't see how they can get 25-50% more performance out of just 50W of headroom, unless AMD wants to go over 250W of power usage.

Also, the HD7970GE was released on a 6-months-more-mature 28nm node, which allowed them to bump the clocks from 925MHz to 1050MHz. That means it already took advantage of some 28nm node maturity. I am most hesitant to believe in a 25-50% increase over the HD7970GE because AMD went from the 40nm 389mm2 die of the HD6970 to the 28nm 365mm2 die of the HD7970, and that full-node shrink allowed them to bump performance about 45-50% while power consumption stayed roughly the same (http://www.techspot.com/review/603-best-graphics-cards/page11.html).

Based on that, I don't see how they can net 25-50% more performance on the same 28nm node. The last time AMD was stuck on the same node was the move from HD5870 to HD6970, and performance barely went up 15% on average. Most of the increases came at higher resolutions, where the HD5870 ran out of its 1GB of VRAM, and in games that use tessellation, because the geometry engines were upgraded in Cayman.

The only ways I can see a 25-50% boost happening are if AMD increases die size way beyond 365mm2 to 500+mm2 (which would be unheard of for the firm), or does a complete redesign by starting with Pitcairn as a leaner gaming chip instead of Tahiti XT, dropping most of Tahiti's double-precision compute functionality to reduce transistor waste. Then the starting base would be a 212mm2 chip, not a 365mm2 one, and they could essentially double everything inside Pitcairn to make a 420mm2 lean gaming monster. But given AMD's focus on HSA/HPC, I can't see them ditching the double precision of the HD7970 on the HD8970. I think that's part of their strategy moving forward. 

With NV things are different, because they have a history of making very large chips (8800GTX = 484mm2, GTX280 = 576mm2, GTX480 = 526mm2, GTX580 = 520mm2), and because GK104 is under 300mm2 they have a lot more room to increase performance. GK110 already ships in the K20 and K20X Tesla parts, so NV can easily build a 520-550mm2 Titan card on the back of that proven manufacturing experience. Additionally, unlike AMD, which already went to a 384-bit bus and 288GB/sec of memory bandwidth, GK104 is starved with just 192GB/sec, and yet it's not much slower than the HD7970GE. If NV widens the bus, that alone could net a 40% performance increase. 
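The bandwidth gap is easy to verify: peak GDDR5 bandwidth is bus width in bytes times the effective per-pin data rate. Using the public specs of both cards:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s for a GDDR5 interface."""
    return bus_width_bits / 8 * data_rate_gbps

gk104  = bandwidth_gb_s(256, 6.0)   # GTX 680: 256-bit bus at 6 Gbps effective
tahiti = bandwidth_gb_s(384, 6.0)   # HD 7970 GE: 384-bit bus at 6 Gbps effective

print(gk104, tahiti)                # 192.0 vs 288.0 GB/s
print(f"{tahiti / gk104 - 1:.0%}")  # a 384-bit bus buys 50% more bandwidth
```

Note the bandwidth delta itself is 50%; the 40% figure above is this post's estimate of how much of that would translate into actual performance.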

Instead of being an HD4870 vs. GTX280 situation, this could end up being HD2900XT/3870 vs. 8800GTX all over again. 

Not sure if this is legit, but if true, Titan could be a monster: http://www.techpowerup.com/179605/First-NVIDIA-GeForce-Titan-780-Performance-Numbers-Revealed.html



lol, looking at those Crysis 3 benchmarks... if someone were to go 120Hz triple-monitor surround and wanted to hold 120 frames per second on all 3 monitors, you'd need to go quad-SLI with a GPU like this and hope they scale...

So $500 per monitor x 3 + $900 per GPU x 4 + $1500 for the rest of the computer... the future is going to be expensive.
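Tallying that up (all prices here are this post's round estimates, not quotes):

```python
# Back-of-the-envelope cost of the surround rig described above.
monitors = 3 * 500   # three 120Hz monitors
gpus     = 4 * 900   # quad-SLI at the rumoured Titan price
rest     = 1500      # the rest of the computer

total = monitors + gpus + rest
print(f"${total}")   # $6600
```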



disolitude said:
lol, looking at those Crysis 3 benchmarks... if someone were to go 120Hz triple-monitor surround and wanted to hold 120 frames per second on all 3 monitors, you'd need to go quad-SLI with a GPU like this and hope they scale...

So $500 per monitor x 3 + $900 per GPU x 4 + $1500 for the rest of the computer... the future is going to be expensive.

I hope the final game looks like this at least:

http://www.abload.de/img/c32yhu9xeyxdi.png

http://www.abload.de/img/crysis3mpopenbeta2013znxzi.png

http://www.abload.de/img/crysis3screenshotcitynvyka.png

http://www.abload.de/img/ihh9xhk3bxctrgpz2d.png

http://www.abload.de/img/it1s5qzxpr8gi9zbl2.png

http://www.abload.de/img/i9hlfmwo4nssjmebwm.png

Crysis 3 running on an Intel Core i5-2500K quad-core @ 3.3GHz, 8 GB RAM, and an overclocked Nvidia GTX 680.

http://www.game-debate.com/news/?news=3579&game=Crysis%203&title=Crysis%203%20Nvidia%20GTX%20680%20and%20Intel%20HD%204000%20Onboard%20Graphics%20Benchmarks

SMAA seems to offer the best balance of image quality and performance.