Nvidia Ampere announced, 3090, 3080, 3070 coming later this year

Interesting. Buildzoid thinks it will carry HBM2 and be slightly faster than the RTX 3080. Hope he's right; the 6900's bandwidth will suck otherwise.



eva01beserk said:
Norion said:

If someone gets a 2080 Ti for a similar price to a 3070, won't it be used? I think that would make a brand-new 3070 the better thing to get.

I for one would never buy anything used, but that's just me. The second-hand market is pretty big, especially after the new cards come out. It's why they aren't releasing the weaker 3060 or 3050 yet: the old 2080 and 2070 would be selling at that lower price with probably similar performance. Plus, like I mentioned, if your rig isn't ready with PCIe 4.0 or something like 32 GB of RAM, then a 2080 Ti would probably perform better than a 3070.

If someone finds a 2080 Ti going for $350-400, I could see that being a good option. However, the people who are still buying them at $800-1200 need to do more research before buying things, lol.



Looks really good. I'm looking forward to getting a new laptop with an RTX 3070 sometime next year.



Official member of VGC's Nintendo family, approved by the one and only RolStoppable. I feel honored.

OdinHades said:
Looks really good. I'm looking forward to getting a new laptop with an RTX 3070 sometime next year.

I have a feeling they'll be gimped somewhat; the power consumption is way higher than last gen.



OdinHades said:
Looks really good. I'm looking forward to getting a new laptop with an RTX 3070 sometime next year.

The desktop version of the 3070 has over 220 watts of power draw.
How in the hell will such a thing fit inside a laptop? Maybe there's a cut-down 3060 they call a "3070" in laptops that runs at lower clocks or such.

However, a full 3070 running its normal desktop speeds... that isn't going in a laptop.



JRPGfan said:
OdinHades said:
Looks really good. I'm looking forward to getting a new laptop with an RTX 3070 sometime next year.

The desktop version of the 3070 has over 220 watts of power draw.
How in the hell will such a thing fit inside a laptop? Maybe there's a cut-down 3060 they call a "3070" in laptops that runs at lower clocks or such.

However, a full 3070 running its normal desktop speeds... that isn't going in a laptop.

Lower clocks, lower power DRAM, more aggressive binning with chips that can hit lower voltages.



--::{PC Gaming Master Race}::--

I'm quite certain they will figure something out by next year. I wouldn't mind too much if it has its drawbacks, but my GTX 1070 is getting somewhat slow. It's still fine and all, especially since I'm only playing at Full HD, but meh. Its age is showing.

Won't get a desktop since I'm travelling a lot.



Official member of VGC's Nintendo family, approved by the one and only RolStoppable. I feel honored.

RIP recent 2080 Ti buyers. I'll take one for $399 now. It's like buying stocks in February this year.



Pemalite said:

Lower clocks, lower power DRAM, more aggressive binning with chips that can hit lower voltages.

The memory isn't particularly fast on the 3070 right out of the gate, so you don't really save any watts there. Also, building a business plan on the assumption that "there will be enough binnable chips" is a recipe for failure.

If you need to downclock a 3070 chip that much for wattage, you'd be better off starting with the much cheaper 3060 chip in the first place.



drkohler said:
Pemalite said:

Lower clocks, lower power DRAM, more aggressive binning with chips that can hit lower voltages.

The memory isn't particularly fast on the 3070 right out of the gate, so you don't really save any watts there. Also, building a business plan on the assumption that "there will be enough binnable chips" is a recipe for failure.

If you need to downclock a 3070 chip that much for wattage, you'd be better off starting with the much cheaper 3060 chip in the first place.

GPUs have a clock speed/voltage efficiency curve.

If you push clock rates out past the efficient part of that curve, you need disproportionately more voltage, and power climbs with the square of voltage... Vega is a prime example of this: Vega 64 was actually an extremely efficient GPU architecture, especially when undervolted and underclocked, and thus could find itself in integrated graphics...

But push voltage and clocks out and it's a power-hungry monster.
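
To put a rough number on that curve, here's a minimal sketch of the usual dynamic-power rule of thumb (power scaling with clock times voltage squared); the scaling factors are made-up illustrations, not measured Vega or Ampere values:

```python
# Rough CMOS dynamic-power rule of thumb: P_dyn ~ C * f * V^2.
# The scaling factors below are illustrative, not measured GPU values.

def relative_power(clock_scale: float, voltage_scale: float) -> float:
    """Power relative to a baseline when clock and voltage are both scaled."""
    return clock_scale * voltage_scale ** 2

# Chasing 15% more clock near the top of the curve, where it might need
# ~10% more voltage, costs roughly 39% more power:
print(relative_power(1.15, 1.10))  # ~1.39x

# Undervolting and underclocking by 10% each (the classic Vega tweak)
# cuts power by roughly a quarter:
print(relative_power(0.90, 0.90))  # ~0.73x
```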


Same thing with Polaris.
It started life as the Radeon RX 480 at a modest 150 W... but that increased to 185 W with the RX 580. You *might* have gained a couple of fps: AMD pushed clock speeds from 1120 MHz to 1257 MHz, but needed to increase voltages to maintain yields, and that cost an additional 35 W.
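
As a back-of-envelope check on those Polaris figures with the same rule of thumb (the implied voltage bump is an estimate for illustration, not a measured value):

```python
# RX 480 -> RX 580, applying P_dyn ~ f * V^2 to the figures quoted above.
f_480, f_580 = 1120, 1257   # clock speeds in MHz
p_480, p_580 = 150, 185     # board power in watts

clock_scale = f_580 / f_480  # ~1.12x clock
power_scale = p_580 / p_480  # ~1.23x power

# Clock scaling alone would only predict about 168 W:
print(f"{p_480 * clock_scale:.0f} W")

# The remaining gap implies roughly 5% more voltage:
print(f"{(power_scale / clock_scale) ** 0.5:.3f}x")  # ~1.048x
```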

And I think we are seeing the same tactic with Ampere. Obviously I can't verify that, since I don't have an Ampere-based GPU, so I'm not 100% sure what its voltage/clock/efficiency curve looks like; I'm only speculating.

Notebooks obviously run at lower TDPs, so adjusting clock rates and voltages is always one of the first things to happen. But it's a balancing act: making the chip smaller but running it at a higher clock rate doesn't mean it will use less energy, or end up faster and cheaper to produce, than a larger chip at a lower clock.

The desktop RTX 2080 Super is a 3072-core design at 1650 MHz, fed by 8 GB of GDDR6 at 496 GB/s.

The mobile variant of the RTX 2080 Super is also a 3072-core design, but clock rates are lowered to 1365 MHz (a reduction of roughly 17%), with 8 GB of GDDR6 at 448 GB/s (a reduction of 10%).

The mobile variant is 150 W; the desktop is 250 W.

nVidia managed to shave 100 W (the desktop part draws about 67% more power than the mobile one) by lowering core clocks by roughly 17% and memory clocks by 10%, with an accompanying reduction in voltage.

Yes, we could argue nVidia might have been better off just taking a smaller GPU like the vanilla RTX 2070, which on the desktop is a 2304-core design at 1410 MHz with 8 GB of GDDR6 at 448 GB/s...
And yet the desktop RTX 2070, despite having the same memory setup as the mobile RTX 2080 Super and hitting around the same performance level with fewer cores, still works out to a 175 W TDP.
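
As a crude sanity check on that comparison, using cores times clock as a rough throughput proxy and the board powers quoted above (these are illustrative estimates, not benchmarks, and sustained laptop clocks will differ in practice):

```python
# Crude perf-per-watt comparison using the figures quoted above.
# cores * clock is only a rough proxy for throughput, and sustained laptop
# clocks differ from rated ones, so treat these as illustrations only.

cards = {
    # name: (CUDA cores, core clock in MHz, board power in W)
    "RTX 2080 Super (desktop)": (3072, 1650, 250),
    "RTX 2080 Super (mobile)":  (3072, 1365, 150),
    "RTX 2070 (desktop)":       (2304, 1410, 175),
}

for name, (cores, clock, watts) in cards.items():
    proxy = cores * clock  # crude throughput proxy
    print(f"{name}: {proxy / watts:,.0f} proxy units per watt")

# On this metric the mobile 2080 Super comes out well ahead: the bigger chip,
# clocked down and undervolted, beats the smaller 2070 that is pushed harder.
```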

There are other aspects to consider too: functional units such as RT cores and PolyMorph engines are often tied to CUDA core counts, so cutting cores can have knock-on consequences as well.

Either way... the point I am making is that it's not as simple as "take a smaller chip and be done with it". Lots of considerations go into it, and lots of extensive testing as well.
nVidia will have the best understanding of the efficiency curves of its various GPU architectures, and will have done the necessary profiling to find the best bang for buck.

I probably waffled on longer than I originally intended to here... Apologies.



--::{PC Gaming Master Race}::--