haxxiy said:

Not going to lie, performance per watt for the Ampere cards is a bit lame. In theory that's one and a half die shrinks and ~22% is all we got. This decade might see GPUs go the way of CPUs (in terms of yearly perf increases) if this keeps up, and it should, considering the next nodes have even worse theoretical power consumption improvements.

I mean, that's good for AMD, since their results were the same-ish with the Radeon VII. I guess that means RDNA was actually a huge step up compared to GCN if these power consumption characteristics are intrinsic to the node.
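Just to put rough numbers on that ~22% perf-per-watt figure: it is simply performance divided by power for the new card relative to the old one. A minimal sketch, with purely illustrative figures rather than measured results:

```python
# Back-of-the-envelope perf-per-watt comparison.
# All numbers below are illustrative assumptions, not benchmark results.
def perf_per_watt_gain(old_perf, old_watts, new_perf, new_watts):
    """Fractional perf/W improvement of the new card over the old one."""
    return (new_perf / new_watts) / (old_perf / old_watts) - 1.0

# Hypothetical case: ~70% more performance for ~42% more board power
# comes out to roughly a 20% perf/W gain, in the ballpark of the quoted ~22%.
print(f"{perf_per_watt_gain(100, 225, 170, 320):.1%}")
```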

The improvements to GPU performance across the board should certainly lessen as it gets harder and harder to achieve significant node shrinks. Nvidia has seen the writing on the wall, which is why they are pushing something like DLSS to claw that performance back. While AMD tries to play catch-up on rasterization and ray tracing performance, Nvidia is trying to figure out what's next. And judging from the reception that DLSS 2.0 has gotten, it could be the future.

The kicker this gen is probably going to be pricing, based on the slides we have seen from the Xbox Series X Hot Chips presentation. AMD is probably going to price their GPUs similarly to Nvidia once again while performing similarly, if not worse. Imo, Nvidia's eventual solution might be that, because GPUs are getting so expensive to make, if DLSS does manage to take off, a person can buy, say, a 3060 and use DLSS to get close to 3080 native performance and similar visuals for half the price.
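For what it's worth, here is the rough arithmetic behind that "buy a 3060, get 3080-ish frame rates" idea: with ML upscaling the GPU shades far fewer pixels internally and then pays a small fixed cost for the upscale pass. A minimal sketch, where the 30 FPS baseline, the 0.5 render scale and the 1.5 ms upscale cost are all assumptions for illustration, not benchmarks:

```python
# Rough model of ML upscaling: shading cost scales with the internal pixel
# count, plus an assumed fixed per-frame cost for the upscaling pass itself.
def estimated_fps(native_fps_4k, internal_scale=0.5, upscale_ms=1.5):
    """Estimate FPS when rendering at internal_scale per axis and upscaling to 4K."""
    native_ms = 1000.0 / native_fps_4k           # frame time at native 4K
    render_ms = native_ms * internal_scale ** 2  # pixel count scales with scale^2
    return 1000.0 / (render_ms + upscale_ms)

# Hypothetical mid-range card managing 30 FPS at native 4K:
print(f"{estimated_fps(30):.0f} FPS rendering at 1080p and upscaling")  # ~102 FPS
```

Obviously this ignores CPU limits and the parts of the frame that don't scale with resolution, so real-world gains are smaller, but it shows why the tier jump isn't crazy on paper.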

Now yes, Microsoft has DirectML, but we know that the ML performance of RDNA 2 is worse than Turing's Tensor cores, judging by the INT8 figures shown for the Series X versus Turing. But the biggest problem is getting someone to actually use it before we can even judge how it performs in the real world.
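For context, this is roughly what "using DirectML" looks like from the developer side today, going through ONNX Runtime's DirectML execution provider. The model file and tensor shape below are hypothetical placeholders (there is no public DLSS-style network to point at), so treat it as a sketch of the plumbing rather than a working upscaler:

```python
# Sketch: running a (hypothetical) quantized upscaling model through DirectML
# via ONNX Runtime. Requires the onnxruntime-directml package on Windows.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler_int8.onnx",                   # hypothetical model file
    providers=["DmlExecutionProvider",      # DirectML: runs on any DX12-capable GPU
               "CPUExecutionProvider"],     # fallback if DirectML is unavailable
)

frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # dummy 1080p frame
input_name = session.get_inputs()[0].name
upscaled = session.run(None, {input_name: frame})[0]
print(upscaled.shape)
```

The plumbing is there and it runs on any DX12 GPU, but until a major engine or game actually ships something on top of it, there is nothing to compare against DLSS in practice.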
