haxxiy said:
JEMC said:

Thanks. But we're still comparing a 175W card on a laptop to a desktop card using 300W, almost twice the power. It's an impressive improvement in efficiency.

When you consider that the 3090 Ti had roughly the same efficiency as the Turing GPUs and even some of the better Pascal cards, this was a sorely needed improvement.

Makes me wonder if that's what Nvidia is paying a premium to TSMC for with the N4 node (since AMD's improvements in efficiency were more like 30% rather than 60%) or if it's mostly architecture.

If I remember correctly (you follow this news more closely than I do), the node Nvidia used is just a more refined 5nm process, not a "true" new node, if that term can even be used. I can't see that being the only reason for the improved efficiency Ada shows when dialed back to its most efficient point.
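For what it's worth, the efficiency figures being compared here (175W vs 300W, 30% vs 60% improvement) are just performance divided by power. A minimal sketch of that arithmetic, with made-up frame rates as placeholders (not benchmark results, and the function name is just mine for illustration):

```python
def perf_per_watt(fps: float, watts: float) -> float:
    """Average frames per second divided by board power."""
    return fps / watts

# Hypothetical example: a 175 W laptop card vs. a 300 W desktop card.
# The frame rates below are placeholder numbers, NOT measured results.
laptop_fps, laptop_power = 100.0, 175.0
desktop_fps, desktop_power = 120.0, 300.0

laptop_eff = perf_per_watt(laptop_fps, laptop_power)
desktop_eff = perf_per_watt(desktop_fps, desktop_power)

# Efficiency gain expressed as a percentage, the way "30% vs 60%" figures are usually quoted.
gain = (laptop_eff / desktop_eff - 1) * 100
print(f"Laptop:  {laptop_eff:.3f} fps/W")
print(f"Desktop: {desktop_eff:.3f} fps/W")
print(f"Efficiency advantage: {gain:.0f}%")
```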



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16GB RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and an RX 9060XT 16GB

Steam / Live / NNID: jonxiquet    Add me if you want, but I'm a single-player gamer.