Chazore said:
Captain_Yuri said:

Yea the glory days are pretty much gone and the gen-on-gen performance uplifts should keep slowing down. Maxwell > Pascal went from 28nm > 16nm. We ain't gonna see jumps like that anymore, and with fabs continuously raising prices while delivering smaller jumps every generation, it will get reflected in the products one way or another until the engineers find ways to reduce costs. I suppose DLSS is certainly a band-aid in that sense, but unless we see a revolutionary node shrink, I don't see how we're ever gonna get gen-on-gen uplifts like those days.

That's what I've been trying to tell you for months now lol. I really don't think we should be relying on DLSS for, like say, a decade, because I know how humanity works: we rely on something for a while, take it for granted, and we end up getting stale and complacent.

By the time the next node shrink comes around, we should be moving on from DLSS to something else entirely.

Well, the problem is that node shrinks aren't something Nvidia has any control over. That is entirely up to the fabs like TSMC and Samsung. These days Nvidia is on the cutting-edge node, which also means the next gen's performance leap won't be as big, because node shrinks have slowed significantly. So the only way Nvidia can gain significant performance is to somehow build an architecture that gives massive improvements in brute force while being on a node with minimal improvements over the previous one relative to the past. Maxwell to Pascal went from 28nm to 16nm. Lovelace to Blackwell is going from 5nm to 3nm, and that's not Nvidia's fault because TSMC has nothing more advanced than 3nm. It will be interesting to see if they can do that, but it's not like consumers have any power over fabs, since TSMC/Samsung/Intel aren't purposefully cockblocking node shrinks.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850