
Nvidia Ampere announced, 3090, 3080, 3070 coming later this year

OdinHades said:
Looks really good. I'm looking forward to getting a new laptop with an RTX 3070 sometime next year.

The desktop version of the 3070 is over 220 watts of power draw.
How in the hell will such a thing fit inside a laptop? Maybe there's a cut-down 3060 that they call the 3070 in laptops, running lower clocks or the like.

However, a full 3070 running its normal desktop speeds... that isn't going in a laptop.



JRPGfan said:
OdinHades said:
Looks really good. I'm looking forward to getting a new laptop with an RTX 3070 sometime next year.

The desktop version of the 3070 is over 220 watts of power draw.
How in the hell will such a thing fit inside a laptop? Maybe there's a cut-down 3060 that they call the 3070 in laptops, running lower clocks or the like.

However, a full 3070 running its normal desktop speeds... that isn't going in a laptop.

Lower clocks, lower-power DRAM, more aggressive binning with chips that can hit lower voltages.




www.youtube.com/@Pemalite

I'm quite certain they'll figure something out by next year. I wouldn't mind too much if it has its drawbacks, but my GTX 1070 is getting somewhat slow. It's still fine and all, especially since I'm only using Full HD, but meh. Its age is showing.

Won't get a desktop since I'm travelling a lot.



I'm an official member of VGC's Nintendo family, recognized by the one and only RolStoppable. I'm honored.

RIP recent 2080 Ti buyers. I'll take one for $399 now. It's like buying stocks in February this year.



Pemalite said:

Lower clocks, lower-power DRAM, more aggressive binning with chips that can hit lower voltages.

The memory isn't particularly fast on the 3070 right out of the gate, so you don't really save any watts there. Also, building a business plan on the assumption that "there will be enough binnable chips" is a recipe for failure.

If you need to downclock a 3070 chip that much for wattage, you'd be better off starting with the much cheaper 3060 chip in the first place.



drkohler said:
Pemalite said:

Lower clocks, lower-power DRAM, more aggressive binning with chips that can hit lower voltages.

The memory isn't particularly fast on the 3070 right out of the gate, so you don't really save any watts there. Also, building a business plan on the assumption that "there will be enough binnable chips" is a recipe for failure.

If you need to downclock a 3070 chip that much for wattage, you'd be better off starting with the much cheaper 3060 chip in the first place.

GPUs have a clock-speed/voltage efficiency curve.

If you push clock rates out, you need disproportionately more voltage... Vega is a prime example of this: Vega 64 was actually an extremely efficient GPU architecture, especially when undervolted and underclocked, which is why it could find itself in integrated graphics...

But push voltage and clocks out and it's a power-hungry monster.


Same thing with Polaris.
It started life with the Radeon RX 480 at a modest 150W... but power increased to 185W with the RX 580. You *might* have gained a couple of fps: AMD pushed clock speeds from 1120MHz to 1257MHz, but needed to increase voltages to maintain yields, which cost an additional 35W.
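
To put rough numbers on that: dynamic power scales roughly with frequency times voltage squared (P ≈ C·V²·f), so a modest clock bump that also needs a voltage bump gets expensive fast. Here's a quick Python sketch; the voltages are made-up placeholders, not AMD's real RX 480/580 figures.

# Rough dynamic-power scaling estimate: P ~ C * V^2 * f
# The voltages below are illustrative placeholders, NOT real RX 480/580 numbers.
def scaled_power(base_power_w, base_clock_mhz, new_clock_mhz, base_voltage_v, new_voltage_v):
    freq_ratio = new_clock_mhz / base_clock_mhz
    volt_ratio = (new_voltage_v / base_voltage_v) ** 2
    return base_power_w * freq_ratio * volt_ratio

# RX 480 -> RX 580 style bump: 1120MHz -> 1257MHz (the clocks quoted above),
# with a hypothetical 1.06V -> 1.15V voltage increase to hold yields.
print(round(scaled_power(150, 1120, 1257, 1.06, 1.15)))  # ~198W, in the ballpark of the real 185W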

And I think we are seeing that same tactic with Ampere. Obviously I can't verify it, as I don't have an Ampere-based GPU, so I'm not 100% sure what its voltage/clock/efficiency curve looks like; I'm only speculating.

Notebooks obviously run at lower TDPs, so an adjustment to clock rates and voltages is always one of the first things to happen. But it's a balancing act: making the chip smaller but clocking it higher doesn't mean it will use less energy, or end up faster and cheaper to produce, than a larger chip at a lower clock.

The desktop RTX 2080 Super is a 3072-core design at 1650MHz, fed by 8GB of GDDR6 at 496GB/s.

The mobile variant of the RTX 2080 Super is also a 3072-core design, but clock rates are lowered to 1365MHz (a reduction of roughly 17%), with 8GB of GDDR6 at 448GB/s (a reduction of roughly 10%).

The mobile variant is 150W; the desktop is 250W.

nVidia managed to shave off 100W, a 40% reduction (or put another way, the desktop part draws 66% more power), by lowering core clocks by roughly 17% and memory clocks by 10%, with an accompanying reduction in voltage.

Yes, we could argue nVidia might have been better off just taking a smaller GPU like the vanilla RTX 2070, which on the desktop is a 2304-core design at 1410MHz with 8GB of GDDR6 at 448GB/s...
And yet the desktop RTX 2070, despite having the same memory setup as the mobile RTX 2080 Super and hitting around the same performance level with fewer cores, still works out to a 175W TDP.
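
A crude perf-per-watt sanity check on those three parts (the core counts, clocks and TDPs are the ones quoted above; performance is assumed to scale with cores × clock, which ignores memory bandwidth, boost behaviour and architectural differences):

# Crude perf-per-watt comparison of the parts quoted above.
# Assumes performance ~ cores * clock, which is only a rough proxy.
parts = {
    "RTX 2080 Super (desktop)": (3072, 1650, 250),
    "RTX 2080 Super (mobile)":  (3072, 1365, 150),
    "RTX 2070 (desktop)":       (2304, 1410, 175),
}
for name, (cores, clock_mhz, tdp_w) in parts.items():
    throughput = cores * clock_mhz / 1e6          # arbitrary units
    print(f"{name}: {throughput:.2f} units, {throughput / tdp_w * 1000:.1f} units per kW")

By this rough measure the mobile 2080 Super comes out well ahead on perf-per-watt, which is the whole point of binning and downclocking a bigger die.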

There are other aspects we need to consider as well, functional units can often be tied to CUDA core counts like RT cores or Polymorph engines, which can have some consequences as well.

Either way... the point I'm making is that it's not as simple as "take a smaller chip and be done with it". Lots of considerations go into it, and lots of extensive testing as well.
nVidia will have the best understanding of the efficiency curves of its various GPU architectures, and would have done the necessary profiling to find the best bang for the buck.

I probably waffled on longer than I originally intended to here... Apologies.




www.youtube.com/@Pemalite

I just needed to share this, and I think it fits well in this thread.

Enjoy: https://youtu.be/NbNZxX2MIFM



It's getting more apparent that Samsung 8nm is closer to TSMC 12nm, and these cards are power-hungry hippos. It doesn't matter much for the 3060/70, but a new Titan at 400W or a hot laptop GPU is not what the market needs. AMD has a good chance at raw performance if they use TSMC 7nm.



From what I can tell, at low resolutions like 1080p the performance gap between the 2080 Ti and the 3080 isn't very big unless you include ray tracing. Around 14% or so.

At higher resolutions like 1440p, and especially at 4K, we start to see big gains in rasterization and even bigger gains in ray-traced games. DF has a pretty good video about it.

Usually 25-35% faster than the 2080 Ti at 4K on average in traditional rasterization, and 30-45% faster with ray tracing.
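
For a rough sense of what those percentages mean in frame rates (the 60fps baseline is an arbitrary example, not a figure from DF's video):

# Translate the quoted uplift ranges into fps at an example baseline.
baseline_fps = 60.0  # hypothetical 2080 Ti result at 4K, purely illustrative
for label, low, high in [("rasterization", 0.25, 0.35), ("ray tracing", 0.30, 0.45)]:
    print(f"{label}: {baseline_fps * (1 + low):.0f}-{baseline_fps * (1 + high):.0f} fps on a 3080")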

Last edited by Jizz_Beard_thePirate - on 16 September 2020

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850