
Forums - PC Discussion - FSR 2.0! now with Temporal Up-Sampling! (and better image quality)

The only part that irks me about FSR is its overuse of sharpening. Running FSR in Cyberpunk means dealing with sharpening forced on: the game gives you a clear option to turn it off, but it won't actually disable until you turn off FSR entirely. It also doesn't help that with the combined sharpening you get a horrible jaggy shimmer in distant detail like wires or fence grates.

FSR 2 looks like it's going in a decent direction, but I do wish they would tone down the sharpening.
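To illustrate why aggressive sharpening causes that shimmer on thin detail like wires: here's a toy unsharp-mask sketch in Python. This is not AMD's actual CAS/RCAS algorithm, just the general idea of how a high sharpening strength overshoots around high-contrast edges.

```python
# Illustrative unsharp-mask sharpening on a 1-D "scanline" of pixel values.
# High strength amplifies thin high-contrast detail (wires, fence grates),
# producing over/undershoot ("ringing") that flickers frame to frame.

def sharpen(pixels, strength):
    """Classic unsharp mask: out = px + strength * (px - local_blur)."""
    out = []
    for i, px in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        blur = (left + px + right) / 3.0         # 3-tap box blur
        out.append(px + strength * (px - blur))  # boost deviation from blur
    return out

# A thin bright wire against a dark sky:
scanline = [10, 10, 200, 10, 10]
print(sharpen(scanline, 0.5))  # mild: wire boosted a little
print(sharpen(scanline, 2.0))  # strong: heavy overshoot/undershoot around the wire
```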



Step right up come on in, feel the buzz in your veins, I'm like an chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

hinch said:

@Conina since the GPU on the Aerith APU is RDNA 2 based, I'd assume so.

AnandTech has also done a comparison here - https://www.anandtech.com/show/17317/amd-teases-fsr-20-temporal-upscaling-tech-for-games-coming-in-q2

Side-by-side slider on performance modes for FSR 2 vs FSR 1 and native, and FSR 2 actually comes out on top in detail. Impressive stuff!

Yep, saw that slider too.
I also agree, FSR 2.0 comes out on top of native in that screenshot comparison.
(Even if shadows are a bit softer, the sharpness and detail increase in other areas give it a leg up imo (looks better).)



Pemalite said:

I will of course always prefer native resolution and a locked framerate, but lower-end PCs and devices don't get that privilege, sadly.

The industry has definitely determined that a temporal approach is going to be the standard going forward, so it's now ubiquitous with AMD throwing its hat in.

Generally if everyone is going a certain direction... there's a reason for it :)
This has been proven to work great (Nvidia got there first, but plenty of others are following).

Native isn't always an option with smooth, locked framerates.
Also DLSS and now FSR are so damn close to native (in some areas they come out above) that there's very little reason not to use this tech if you have the option.



I have an RTX 2080 Super Max-Q PC, so I can play pretty much any game I want at close to max settings, yet I prefer to use DLSS if the game supports it, because then my computer doesn't feel like the surface of the sun and my fans don't sound as loud as the horn of Helm's Deep. I'm happy for everyone without Nvidia cards who's getting an improved experience with FSR 2.0! This is the future!



Dulfite said:

I have an RTX 2080 Super Max-Q PC, so I can play pretty much any game I want at close to max settings, yet I prefer to use DLSS if the game supports it, because then my computer doesn't feel like the surface of the sun and my fans don't sound as loud as the horn of Helm's Deep. I'm happy for everyone without Nvidia cards who's getting an improved experience with FSR 2.0! This is the future!

Rumors are saying the new top-of-the-line series from Nvidia will be 650+ watts (with OC cards hitting into the ~800 watt range).

The current 3090 is "only" 350 watts TDP (~359 watts during gaming, with a max peak of ~450 watts).

"Surface of the sun" is no joke.
These things (GPUs) are only getting more and more power hungry (yaaay, global warming and electricity prices!).

Newer AMD cards are also rumored to be more power hungry than their current cards (though not near what 40xx series cards will be).
Hopefully there's going to be a massive jump in performance with new-gen cards (to match the power draw increase).

But like, I can respect using DLSS alone just to cut down on power draw/heat.
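For a rough sense of what those wattage figures mean on the power bill, here's a back-of-the-envelope Python calculation. The 3 hours/day of gaming and the 0.30/kWh electricity price are made-up example numbers, not claims.

```python
# Back-of-the-envelope yearly running cost for the GPU wattages quoted above.
# Assumed inputs (hours per day, price per kWh) are illustrative only.

def yearly_cost_eur(watts, hours_per_day, price_per_kwh):
    kwh_per_year = watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

for label, watts in [("3090-class (~350 W)", 350), ("rumored 650 W card", 650)]:
    cost = yearly_cost_eur(watts, hours_per_day=3, price_per_kwh=0.30)
    print(f"{label}: ~{cost:.0f} EUR/year at 3 h/day, 0.30 EUR/kWh")
```

Even at modest usage, the rumored cards nearly double the running cost versus a 3090-class part.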



JRPGfan said:
Pemalite said:

I will of course always prefer native resolution and a locked framerate, but lower-end PCs and devices don't get that privilege, sadly.

The industry has definitely determined that a temporal approach is going to be the standard going forward, so it's now ubiquitous with AMD throwing its hat in.

Generally if everyone is going a certain direction... there's a reason for it :)
This has been proven to work great (Nvidia got there first, but plenty of others are following).

Native isn't always an option with smooth, locked framerates.
Also DLSS and now FSR are so damn close to native (in some areas they come out above) that there's very little reason not to use this tech if you have the option.

We've yet to see direct comparisons of FSR 2 vs DLSS 2.1 to know how close AMD is coming, and even then I'm still expecting Nvidia to come out ahead.




Chazore said:
JRPGfan said:

Generally if everyone is going a certain direction... there's a reason for it :)
This has been proven to work great (Nvidia got there first, but plenty of others are following).

Native isn't always an option with smooth, locked framerates.
Also DLSS and now FSR are so damn close to native (in some areas they come out above) that there's very little reason not to use this tech if you have the option.

We've yet to see direct comparisons of FSR 2 vs DLSS 2.1 to know how close AMD is coming, and even then I'm still expecting Nvidia to come out ahead.

I look forward to comparisons as well.

*Also, no matter how good DLSS is, there's no way for current-gen consoles (PS5/XSX) to make use of it.
They could make use of FSR 2.0, however.

Last edited by JRPGfan - on 18 March 2022

JRPGfan said:

I look forward to comparisons as well.

*Also, no matter how good DLSS is, there's no way for current-gen consoles (PS5/XSX) to make use of it.
They could make use of FSR 2.0, however.

I know, but that's why the primary comparisons between the two are going to be made on PC. Consoles will make use of FSR, but it's PC that ends up using both, and that's where most of those comparisons will come from.




Oh wow, so it might after all be possible to come fairly close to the performance gains provided by DLSS2 without the dedicated hardware.

I assumed FSR2 would be closer to a widely adopted version of Insomniac's Temporal Injection method, but it's promising to be a bit more! Looking forward to comparisons vs UE5's TSR and DLSS2.



JRPGfan said:

Generally if everyone is going a certain direction... there's a reason for it :)
This has been proven to work great (Nvidia got there first, but plenty of others are following).

The benefits are pretty well demonstrated at this point.

JRPGfan said:

Native isn't always an option with smooth, locked framerates.

Native is always an option for me... but I'm sure I don't need to stipulate that consoles and lower-end PCs would be the target audiences again.

JRPGfan said:

Also DLSS and now FSR are so damn close to native (in some areas they come out above) that there's very little reason not to use this tech if you have the option.

Well. There are sometimes artifacts in the rendering, something which a normal person sitting on a couch a few metres from the display may not notice with an untrained eye.
DLSS will also sometimes "break up" in highly complex scenes with lots of movement... and many developers will over-sharpen the visuals unnaturally, which causes sharpening artifacts... e.g. in Red Dead Redemption 2 there are flickering artifacts in the trees. God of War is another culprit.

There are a lot of reasons to stick with native... but not everyone is as fussy as I am; sometimes "close enough" is "good enough" for a lot of people.

The absolute best approach, if you have the horsepower (and I probably need to stipulate this twice...), is supersampling over DLSS or native.
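The core of supersampling is simple: render above the target resolution, then filter down. Here's a toy Python sketch of ordered-grid 2x2 SSAA with a box filter; real renderers use rotated/sparse sample grids and better filters, so this is illustrative only.

```python
# Minimal sketch of ordered-grid supersampling (SSAA): render at 2x the
# target resolution in each axis, then average each 2x2 block down to one
# output pixel. Jagged edges become intermediate values (anti-aliasing).

def downsample_2x(image):
    """Box-filter each 2x2 block of a high-res image into one output pixel."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1]
             + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

hi_res = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [255, 0, 0, 0],
    [0, 255, 0, 0],
]
print(downsample_2x(hi_res))  # hard stair-step edges become blended values
```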

JRPGfan said:

Rumors are saying the new top-of-the-line series from Nvidia will be 650+ watts (with OC cards hitting into the ~800 watt range).

The current 3090 is "only" 350 watts TDP (~359 watts during gaming, with a max peak of ~450 watts).

"Surface of the sun" is no joke.
These things (GPUs) are only getting more and more power hungry (yaaay, global warming and electricity prices!).

Newer AMD cards are also rumored to be more power hungry than their current cards (though not near what 40xx series cards will be).
Hopefully there's going to be a massive jump in performance with new-gen cards (to match the power draw increase).

But like, I can respect using DLSS alone just to cut down on power draw/heat.

Just remember... don't conflate TDP with power consumption, as they are not the same thing.

Rumors are also to be taken with a grain of salt.

Chazore said:
JRPGfan said:

Generally if everyone is going a certain direction... there's a reason for it :)
This has been proven to work great (Nvidia got there first, but plenty of others are following).

Native isn't always an option with smooth, locked framerates.
Also DLSS and now FSR are so damn close to native (in some areas they come out above) that there's very little reason not to use this tech if you have the option.

We've yet to see direct comparisons of FSR 2 vs DLSS 2.1 to know how close AMD is coming, and even then I'm still expecting Nvidia to come out ahead.

They won't be able to match nVidia... not initially, anyway; this is AMD's first attempt, whilst nVidia has been working on and improving this technology for years.
But they also probably don't need to match nVidia, due to the widely adoptable nature of AMD's technology on show here.

JRPGfan said:
Chazore said:

We've yet to see direct comparisons of FSR 2 vs DLSS 2.1 to know how close AMD is coming, and even then I'm still expecting Nvidia to come out ahead.

I look forward to comparisons as well.

*Also, no matter how good DLSS is, there's no way for current-gen consoles (PS5/XSX) to make use of it.
They could make use of FSR 2.0, however.

That would be because of the requirement for Tensor cores... which are really adept at FMA operations... which can consequently also be done on the regular CUDA cores... it's just not ideal, hence why nVidia demands Tensor cores to be present.

AMD however can get around this, as their GPUs tend to be extremely proficient at math anyway... It still won't be as good as having Tensor cores, though.
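For reference, an FMA is just a fused multiply-add, a * b + c; the Tensor-core advantage is throughput, not a different operation. A Python sketch of a multiply-accumulate chain (the single-rounding fusion of real hardware FMA isn't modelled here):

```python
# Fused multiply-add (a * b + c) is the workhorse operation behind both
# Tensor cores and regular shader/CUDA cores; dedicated units just issue
# many of them per cycle. Sketch: a dot product as a chain of FMAs.

def fma(a, b, c):
    return a * b + c  # one instruction on GPU hardware (fused, single rounding)

def dot(xs, ys):
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = fma(x, y, acc)  # multiply-accumulate: one FMA per element
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```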

Kyuu said:

Oh wow, so it might after all be possible to come fairly close to the performance gains provided by DLSS2 without the dedicated hardware.

I assumed FSR2 would be closer to a widely adopted version of Insomniac's Temporal Injection method, but it's promising to be a bit more! Looking forward to comparisons vs UE5's TSR and DLSS2.

Not entirely. AMD is going to be taking away some shader cores that would otherwise render the game in order to perform the FMA calculations, so there is going to be a corresponding hit to performance due to less hardware being available for rendering.
nVidia avoids this by performing those calculations on dedicated cores.
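The core idea both upscalers share can be sketched in a few lines: blend each new frame into a persistent history buffer. This toy Python example uses a plain exponential moving average and omits motion vectors and history rejection, which real implementations like FSR 2 and DLSS 2 depend on.

```python
# Bare-bones temporal accumulation: per pixel, blend a fraction `alpha` of
# the current frame into the running history. Over many frames, the history
# converges toward the true signal, averaging out per-frame aliasing/noise.

def accumulate(history, current, alpha=0.1):
    """Exponential moving average of frames, per pixel."""
    return [h + alpha * (c - h) for h, c in zip(history, current)]

history = [0.0, 0.0, 0.0]            # start with an empty history buffer
for frame in ([1.0, 1.0, 1.0],) * 20:  # feed 20 identical frames
    history = accumulate(history, frame)
print(history)  # approaches the true signal [1, 1, 1]
```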

Each approach has its pros and cons...



--::{PC Gaming Master Race}::--