JRPGfan said:
Generally, if everyone is going a certain direction... there's a reason for it :) This has been proven to work great (nvidia got there first, but plenty of others are following). |
The benefits are pretty well established at this point.
JRPGfan said:
Native isn't always an option with smooth, locked framerates. |
Native is always an option for me... But I am sure I don't need to stipulate that consoles and lower-end PCs would be the target audiences again.
JRPGfan said:
Also DLSS and now FSR are so damn close to native (in some areas they come out ahead) that there's very little reason not to use this tech if you have the option. |
Well, there are sometimes artifacts in the rendering, something the average person sitting on a couch a few meters away from the display may not notice with an untrained eye.
DLSS will also sometimes "break up" in highly complex scenes with lots of movement... And many developers over-sharpen the visuals unnaturally, which causes sharpening artifacts. E.g. in Red Dead Redemption 2 there are flickering artifacts in the trees; God of War is another culprit.
There are a lot of reasons to stick with native... But not everyone is as fussy as I am; sometimes "close enough" is "good enough" for a lot of people.
The absolute best approach if you have the horsepower (and I probably need to stipulate this twice...) is supersampling, over both DLSS and native.
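To illustrate what supersampling actually does: the game renders at a higher internal resolution, then averages blocks of pixels down to the display resolution. A minimal sketch of a 2x2 box-filter downscale in Python (function and variable names are made up for illustration; a grayscale image is just a list of rows):

```python
def supersample_downscale(hi_res, factor=2):
    """Average factor x factor blocks of a high-res grayscale image
    down to the display resolution (a simple box filter)."""
    height, width = len(hi_res), len(hi_res[0])
    out = []
    for y in range(0, height, factor):
        row = []
        for x in range(0, width, factor):
            # Collect every sample in this factor x factor block...
            block = [hi_res[y + dy][x + dx]
                     for dy in range(factor)
                     for dx in range(factor)]
            # ...and emit their average as one display pixel.
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Render internally at 4x4, display at 2x2:
image = [[0, 0, 100, 100],
         [0, 0, 100, 100],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
print(supersample_downscale(image))  # [[0.0, 100.0], [50.0, 200.0]]
```

Each display pixel is backed by four rendered samples, which is why edges and fine detail look so clean, and why the technique costs so much GPU time.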
JRPGfan said:
Rumors are saying the new top-of-the-line series from nvidia will be 650+ watts (with OC cards hitting the ~800 watt range). The current 3090 is "only" 350 watts TDP (~359 watts during gaming, and a max peak of ~450 watts).
"Surface of the sun" is no joke. These things (GPUs) are only getting more and more power hungry (Yaaay! Global warming! And electricity prices!).
Newer AMD cards are also rumored to be more power hungry than their current cards (though not near what 40xx series cards will be). Hopefully there's going to be a massive jump in performance with new-gen cards (to match the power draw increase).
But like, I can respect using DLSS alone just to cut down on power draw/heat. |
Just remember... Don't conflate TDP with power consumption; they are not the same thing. TDP is a thermal design target for the cooler, whereas actual power draw varies with the workload and can spike well above it.
Rumors are also to be taken with a grain of salt.
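On the electricity-prices point, the arithmetic is simple enough to sketch. A back-of-the-envelope monthly cost estimate in Python, using the ~359 W measured gaming draw mentioned above versus the rumored ~650 W figure; the hours per day and the $0.30/kWh rate are assumptions purely for illustration:

```python
def gaming_energy_cost(draw_watts, hours_per_day, price_per_kwh, days=30):
    """Estimated monthly electricity cost of a GPU at a given average draw."""
    kwh = draw_watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# ~359 W (measured 3090 gaming draw) vs ~650 W (rumored next gen),
# 3 hours/day at an assumed $0.30/kWh:
print(round(gaming_energy_cost(359, 3, 0.30), 2))  # 9.69
print(round(gaming_energy_cost(650, 3, 0.30), 2))  # 17.55
```

So even if the rumors pan out, the dollar difference per month is modest; the heat dumped into the room is arguably the bigger annoyance.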
Chazore said:
JRPGfan said:
Generally, if everyone is going a certain direction... there's a reason for it :) This has been proven to work great (nvidia got there first, but plenty of others are following).
Native isn't always an option with smooth, locked framerates. Also DLSS and now FSR are so damn close to native (in some areas they come out ahead) that there's very little reason not to use this tech if you have the option. |
We've yet to see direct comparisons between FSR 2 and DLSS 2.1 to know how close AMD is coming, and even then I'm still expecting Nvidia to come out ahead. |
They won't be able to match nVidia... Not initially, anyway; this is AMD's first attempt, whilst nVidia has been working on and improving this technology for years.
But they probably don't need to match nVidia either, given the widely adoptable nature of AMD's technology on show here.
JRPGfan said:
Chazore said:
We've yet to see direct comparisons between FSR 2 and DLSS 2.1 to know how close AMD is coming, and even then I'm still expecting Nvidia to come out ahead. |
I look forward to comparisons as well.
*Also, no matter how good DLSS is, there's no way for current-gen consoles (PS5/XSX) to make use of it. They could make use of FSR 2.0, however. |
That would be because of the requirement for tensor cores... Which are really adept at FMA operations... Which can consequently also be done on the regular CUDA cores... It's just not ideal, hence why nVidia requires tensor cores to be present.
AMD, however, can get around this as their GPUs tend to be extremely proficient at shader math anyway... It still won't be as good as having dedicated tensor cores, though.
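For anyone unfamiliar with the term: an FMA (fused multiply-add) is just d = a*b + c done as one operation, and it's the workhorse of the matrix math behind these upscalers; tensor cores grind through huge batches of them at low precision. A toy Python sketch of how a dot product (the core of a matrix multiply) reduces to a chain of FMAs (in real hardware the multiply and add share a single rounding step; plain Python floats don't model that):

```python
def fma(a, b, c):
    """Fused multiply-add: a * b + c. On real hardware this is one
    instruction with one rounding step; here it's ordinary float math."""
    return a * b + c

def dot(xs, ys):
    """Dot product expressed as a chain of FMAs, the same pattern
    tensor cores (or plain CUDA/shader cores) execute during
    matrix multiplies."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = fma(x, y, acc)
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The whole hardware argument is about who executes these chains: dedicated units that run them in parallel with rendering, or the general-purpose cores that are also busy drawing the frame.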
Kyuu said:
Oh wow, so it might after all be possible to come fairly close to the performance gains provided by DLSS2 without the necessary hardware parts.
I assumed FSR2 would be closer to a widely adopted version of Insomniac's Temporal Injection method, but it's promising to be a bit more! Looking forward to comparisons vs UE5's TSR and DLSS2. |
Not entirely. AMD's approach borrows shader time that would otherwise go toward rendering the game in order to perform the FMA calculations, so there is a corresponding hit to performance because less hardware is available for rendering at any given moment.
nVidia avoids this by performing those calculations on dedicated cores.
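The trade-off above is easy to put in numbers. A sketch with purely illustrative frame times (these are NOT benchmarks, just made-up values to show the shape of the argument): native rendering costs some budget per frame, rendering at the lower internal resolution costs less, and the upscale pass either steals shader time (the FSR-style case) or runs on dedicated cores (the DLSS-style case):

```python
def fps(frame_ms):
    """Convert a per-frame cost in milliseconds to frames per second."""
    return 1000.0 / frame_ms

# Illustrative numbers only: native costs 25 ms, the lower internal
# resolution costs 12 ms, and the upscale pass takes 2 ms of shader
# time when it has to share the rendering hardware.
native_ms = 25.0
internal_ms = 12.0
upscale_on_shaders_ms = 2.0

print(round(fps(native_ms), 1))                           # 40.0 (native)
print(round(fps(internal_ms + upscale_on_shaders_ms), 1)) # 71.4 (shared shaders)
print(round(fps(internal_ms), 1))                         # 83.3 (dedicated cores)
```

Even paying the shader-time tax, the shared-hardware approach comes out far ahead of native; the dedicated-core approach just keeps a bit more of the win.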
Each approach has pros and cons...