Forums - PC Discussion - Nvidia Ampere announced, 3090, 3080, 3070 coming later this year

Cyran said:

In the Q&A they did on Reddit for the 3000 series, they mentioned that DLSS 2.1 added support for VR. That's actually the part that got me most excited, if it's taken advantage of and we see higher-resolution VR headsets.

This is my personal taste, so other people may vary, but here's what I'd like to see in different categories:

E-sports - High refresh rate, resolution less important

Single-player experiences - 4K, HDR, and other eye candy like ray tracing, but most of the time I'm fine with a 60Hz refresh rate

VR - Greater than 4K and high refresh rates, and I'd give up some eye candy for those two things. You're basically looking at a screen through a magnifying glass, so not noticing pixels requires a very high resolution, and because of the nature of VR, not having a minimum of 90Hz becomes an issue. Higher than 90 would be better.

If DLSS allows that high resolution with a high refresh rate while still keeping some of the eye candy, that would be awesome for VR.

The problem is that applications have to natively support DLSS. To fix this, Nvidia actually wanted to introduce DLSS 3, but they haven't done so yet, and considering how quiet they've been about it, I doubt we'll see it anytime soon.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

CGI-Quality said:

I'll also explain the MASSIVE jump in CUDA cores. NVIDIA was looking to greatly improve the Ampere SM (streaming multiprocessor) over Turing, specifically in FP32 (single-precision floating-point) throughput, which is also where the theoretical peak (the teraflop count) is measured.

One datapath includes 16 FP32 CUDA cores capable of 16 FP32 operations per clock. The other? 16 FP32 CUDA cores and 16 INT32 cores (for 32-bit integer operations). The result of this new design is that each Ampere partition can execute either 32 FP32 operations per clock, or 16 FP32 and 16 INT32 operations per clock. Combined, the four partitions can achieve 128 single-precision floating-point operations per clock, which DOUBLES the FP32 rate of the Turing streaming multiprocessor (which did 64 FP32 and 64 INT32 operations per clock). In less scientific terms, Ampere's SM has 128 FP32 CUDA cores vs Turing's 64. This is..... a rather big deal!

Ultimately, when you double the FP32 throughput (and double the datapaths as a necessity of that), it helps many more things on the card.

I feel it's more of a cop-out, or a bad compromise. Turing got it right by having dedicated paths for INT and FP loads. Ampere is basically just a cheap way to increase FP cores without sacrificing too much die space for INT cores, which leads to less efficient cores. For example, in the worst-case scenario of always having loads of 64 FP32 and 64 INT32 on every SM, you'd get exactly the same performance as Turing per cycle. Basically, the only reason we see big performance improvements at all is that games generally have higher loads of FP32 than INT32 (and, of course, the increased clocks and SM count).
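To put numbers on that worst case, here's a toy issue-rate model (my own back-of-the-envelope sketch, not anything from an Nvidia spec): per-SM operations per clock as a function of how much of the instruction mix is INT32.

```python
def turing_sm(int_frac: float) -> float:
    """Turing SM: 64 dedicated FP32 lanes plus 64 dedicated INT32 lanes."""
    if int_frac <= 0.0 or int_frac >= 1.0:
        return 64.0  # a pure FP or pure INT mix leaves one pool of lanes idle
    # Issue rate N is capped by both pools:
    #   N * int_frac <= 64 (INT lanes), N * (1 - int_frac) <= 64 (FP lanes)
    return min(64.0 / int_frac, 64.0 / (1.0 - int_frac))


def ampere_sm(int_frac: float) -> float:
    """Ampere SM: 64 dedicated FP32 lanes plus 64 lanes doing FP32 OR INT32."""
    if int_frac <= 0.0:
        return 128.0  # both datapaths run FP32
    # INT32 work can only use the 64 shared lanes: N * int_frac <= 64.
    # Total issue can never exceed all 128 lanes.
    return min(128.0, 64.0 / int_frac)


for f in (0.0, 0.25, 0.5):
    print(f"INT32 fraction {f:.2f}: "
          f"Turing {turing_sm(f):5.1f} ops/clk, Ampere {ampere_sm(f):5.1f} ops/clk")
```

At a 50/50 mix both architectures top out at 128 ops per clock, which is exactly the "same performance as Turing per cycle" scenario; the more the mix leans toward FP32, the further Ampere pulls ahead.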

I'm very interested in how they'll improve on that with Hopper.




vivster said:
Cyran said:

In the Q&A they did on Reddit for the 3000 series, they mentioned that DLSS 2.1 added support for VR. That's actually the part that got me most excited, if it's taken advantage of and we see higher-resolution VR headsets.

This is my personal taste, so other people may vary, but here's what I'd like to see in different categories:

E-sports - High refresh rate, resolution less important

Single-player experiences - 4K, HDR, and other eye candy like ray tracing, but most of the time I'm fine with a 60Hz refresh rate

VR - Greater than 4K and high refresh rates, and I'd give up some eye candy for those two things. You're basically looking at a screen through a magnifying glass, so not noticing pixels requires a very high resolution, and because of the nature of VR, not having a minimum of 90Hz becomes an issue. Higher than 90 would be better.

If DLSS allows that high resolution with a high refresh rate while still keeping some of the eye candy, that would be awesome for VR.

The problem is that applications have to natively support DLSS. To fix this, Nvidia actually wanted to introduce DLSS 3, but they haven't done so yet, and considering how quiet they've been about it, I doubt we'll see it anytime soon.

I agree, but at least it's a path forward where at some point we could see some truly great leaps in VR, because truthfully, without something like DLSS we're not going to have the GPU power to hit the ideal resolution and refresh-rate targets in VR for multiple more generations of GPUs.



vivster said:
CGI-Quality said:

I'll also explain the MASSIVE jump in CUDA cores. NVIDIA was looking to greatly improve the Ampere SM (streaming multiprocessor) over Turing, specifically in FP32 (single-precision floating-point) throughput, which is also where the theoretical peak (the teraflop count) is measured.

One datapath includes 16 FP32 CUDA cores capable of 16 FP32 operations per clock. The other? 16 FP32 CUDA cores and 16 INT32 cores (for 32-bit integer operations). The result of this new design is that each Ampere partition can execute either 32 FP32 operations per clock, or 16 FP32 and 16 INT32 operations per clock. Combined, the four partitions can achieve 128 single-precision floating-point operations per clock, which DOUBLES the FP32 rate of the Turing streaming multiprocessor (which did 64 FP32 and 64 INT32 operations per clock). In less scientific terms, Ampere's SM has 128 FP32 CUDA cores vs Turing's 64. This is..... a rather big deal!

Ultimately, when you double the FP32 throughput (and double the datapaths as a necessity of that), it helps many more things on the card.

I feel it's more of a cop-out, or a bad compromise. Turing got it right by having dedicated paths for INT and FP loads. Ampere is basically just a cheap way to increase FP cores without sacrificing too much die space for INT cores, which leads to less efficient cores. For example, in the worst-case scenario of always having loads of 64 FP32 and 64 INT32 on every SM, you'd get exactly the same performance as Turing per cycle. Basically, the only reason we see big performance improvements at all is that games generally have higher loads of FP32 than INT32 (and, of course, the increased clocks and SM count).

I'm very interested in how they'll improve on that with Hopper.

Turing used tricks too, though. They always do. The end result is what they chase, and by increasing single-precision floating-point operations per second, they can double up on nearly everything else (including performance).

Besides, the area where I'll be able to test efficiency will be in my pipelines/workloads (rendering + pro). Of course, until (and if/when) they unveil a TITAN, I'm stuck without stuff like TCC, but with the same amount of VRAM (albeit faster and with far better bandwidth/more CUDAs), pro/render work will be easy breezy with a pair of 3090s.

Last edited by CGI-Quality - on 04 September 2020


As a 1080 Ti owner, the 3080 does tempt me, but I'll probably wait until the 4000 series before upgrading, and upgrade my CPU as well. After the lackluster 2000 series, it feels nice to see a real leap again, at better prices than last time.




The 3080 is $700 in the US, and over $1600 here in Israel :(



Norion said:
As a 1080 Ti owner, the 3080 does tempt me, but I'll probably wait until the 4000 series before upgrading, and upgrade my CPU as well. After the lackluster 2000 series, it feels nice to see a real leap again, at better prices than last time.

Now that you mention that CPU upgrade, it makes me doubt the 3070 even more, since it has less memory bandwidth and less total memory than the 2080 Ti. If you're not upgrading from a PC that already has PCIe 4.0 support, you're not going to get more performance than a 2080 Ti. It might be better to jump to that instead, since it will probably cost about as much as a 3070. Or you can have a lot of system RAM to compensate, but I'm thinking that those aiming at the 3070 probably don't have it.

Granted, if you have the means, go for anything, but those starting from an old build are probably better off with a 2080 Ti, or you're going to be spending more than just $500 for a 3070 alone.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Will have to wait for more details, but it's seeming very likely that I'll get an RTX 3060 once it's announced and comes out. It ought to be a huge upgrade from the GTX 770 I got back in 2014... That said, I'll probably wait until next summer to get a really fast SSD, and there's a non-zero chance it would make sense to wait until roughly then before upgrading my graphics card as well. Still, I kind of doubt Nvidia will release anything more suitable by then, and it doesn't seem like AMD could possibly have a reasonable answer to Ampere, so AMD might be out for this round too.



Zkuq said:
Will have to wait for more details, but it's seeming very likely that I'll get an RTX 3060 once it's announced and comes out. It ought to be a huge upgrade from the GTX 770 I got back in 2014... That said, I'll probably wait until next summer to get a really fast SSD, and there's a non-zero chance it would make sense to wait until roughly then before upgrading my graphics card as well. Still, I kind of doubt Nvidia will release anything more suitable by then, and it doesn't seem like AMD could possibly have a reasonable answer to Ampere, so AMD might be out for this round too.

https://mobile.twitter.com/RedGamingTech/status/1301884380747624449



Random_Matt said:
Zkuq said:
Will have to wait for more details, but it's seeming very likely that I'll get an RTX 3060 once it's announced and comes out. It ought to be a huge upgrade from the GTX 770 I got back in 2014... That said, I'll probably wait until next summer to get a really fast SSD, and there's a non-zero chance it would make sense to wait until roughly then before upgrading my graphics card as well. Still, I kind of doubt Nvidia will release anything more suitable by then, and it doesn't seem like AMD could possibly have a reasonable answer to Ampere, so AMD might be out for this round too.

https://mobile.twitter.com/RedGamingTech/status/1301884380747624449

I don't know why people are doubting AMD so much, especially since the leaks described Ampere's performance almost perfectly many months ago. The only thing that was wrong was the pricing, which ended up $100 lower per card.

If the leaks hold true for AMD as well, they'll be head to head with the 3080.


