
DLSS vs. FSR vs. XeSS

 

The best option is:

DLSS: 14 votes (87.50%)
FSR: 1 vote (6.25%)
XeSS: 0 votes (0%)
Other: 1 vote (6.25%)

Total: 16 votes

Which is the better AI upscaler?

Personally, I've tended to use DLSS since I got my RTX card, but before I had one, FSR gave me quite a boost in FPS in a lot of games. I've never tried XeSS, but I decided to put it on the poll too for people who know it.




No experience with XeSS, but DLSS trounces FSR, there's no comparison, to my eyes anyway.



I mean, Nvidia is now an AI research and development company before it is anything else. It's one of the few companies that is a powerhouse in fundamental research (probably only bested by Google DeepMind, although they specialize in different domains). They're going to sustain their advantage in DLSS and neural rendering for the foreseeable future. AMD is finally catching up to be somewhat competitive, and Intel has been more or less close to AMD (until recently ahead of them, now behind).



If you can use the FSR 4 or DLSS 4 transformer models, you are well off no matter which you choose.
Typically you can only pick one or the other, depending on your GPU.

DLSS is still a tiny bit better, but the gap has closed massively recently with Redstone.
I think if you took 99% of people and ran a blind test of DLSS vs. FSR 4 and had them guess, there wouldn't be a clear winner.


Here is a video comparing the two:



https://www.youtube.com/watch?v=IpNBmPwFh1I

Watch it and compare for yourself.



curl-6 said:

No experience with XeSS, but DLSS trounces FSR, there's no comparison, to my eyes anyway.

I've also never used XeSS, but it looks like it's Intel's AI upscaler. Since it does the same job as DLSS and FSR, I decided to add it to the poll.



sc94597 said:

I mean, Nvidia is now an AI research and development company before it is anything else. It's one of the few companies that is a powerhouse in fundamental research (probably only bested by Google DeepMind, although they specialize in different domains). They're going to sustain their advantage in DLSS and neural rendering for the foreseeable future. AMD is finally catching up to be somewhat competitive, and Intel has been more or less close to AMD (until recently ahead of them, now behind).

You're thinking of old FSR, from before it was based on AI models, when it was just hand-tuned algorithms.
I.e. FSR 2 and 3... back then, what you wrote was true.

Nowadays it's not.

It's like DLSS transformer model > FSR 4 >>>>>>>>>>>>>>>> XeSS (Intel).

DLSS and FSR are neck and neck, and Intel is far behind.


curl-6 said:

No experience with XeSS, but DLSS trounces FSR, there's no comparison, to my eyes anyway.


^ You're probably also thinking of FSR 2 or 3...



JRPGfan said:
sc94597 said:

I mean, Nvidia is now an AI research and development company before it is anything else. It's one of the few companies that is a powerhouse in fundamental research (probably only bested by Google DeepMind, although they specialize in different domains). They're going to sustain their advantage in DLSS and neural rendering for the foreseeable future. AMD is finally catching up to be somewhat competitive, and Intel has been more or less close to AMD (until recently ahead of them, now behind).

You're thinking of old FSR, from before it was based on AI models, when it was just hand-tuned algorithms.
I.e. FSR 2 and 3... back then, what you wrote was true.

Nowadays it's not.

It's like DLSS transformer model > FSR 4 >>>>>>>>>>>>>>>> XeSS (Intel).

DLSS and FSR are neck and neck, and Intel is far behind.

I am actually not thinking of FSR 1-3, but rather of how each company has handled its deep-learning suite in general. There is much more to the DLSS suite and Nvidia's plans for deep-learning assisted graphics than what we have currently, and it's only recently (a week ago) that AMD reached parity with what Nvidia has had for years. AMD's answer to ray reconstruction, a feature Nvidia has had for three years (since the Lovelace launch), only released a week ago, as an example.

If you read Nvidia's white papers, you basically get a preview of what they will release with each DLSS iteration.

https://research.nvidia.com/publications



DLSS 4 or 3.5 >>> FSR 4 >>> DLSS 3 >>> XeSS >>> DLSS 2 >>> FSR 3.

Pretty much in that order.

Just asking whether DLSS is better than FSR or XeSS isn't looking at the significant changes between versions... and many games don't get updated to use the newer upscaling models either.

However... unlike DLSS, XeSS and FSR 3 and earlier can run on older hardware that doesn't have FP8 support, so that's a big advantage; a rough sketch of that kind of fallback logic is below.
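
To make that concrete, here's a rough, hypothetical sketch of that kind of selection logic: prefer the newest model the GPU can actually run (roughly the ranking above) and fall back to XeSS / FSR 3 on hardware without the newer tensor/FP8 paths. None of these names or flags come from a real engine API; they're made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class GPUCaps:
    """Hypothetical capability flags; a real engine would query the driver."""
    vendor: str              # "nvidia", "amd", "intel", ...
    has_tensor_cores: bool   # stand-in for "can run DLSS" (illustrative)
    has_fp8: bool            # stand-in for "can run FSR 4 / newest models"

def pick_upscaler(gpu: GPUCaps, game_supports: set[str]) -> str:
    """Walk a preference list (roughly DLSS 4 > FSR 4 > DLSS 3 > XeSS > FSR 3)
    and return the first option both the game and the GPU support."""
    preference = [
        ("DLSS 4", gpu.vendor == "nvidia" and gpu.has_tensor_cores),
        ("FSR 4",  gpu.has_fp8),            # FSR 4 wants newer hardware
        ("DLSS 3", gpu.vendor == "nvidia" and gpu.has_tensor_cores),
        ("XeSS",   True),                   # fallback path runs broadly
        ("FSR 3",  True),                   # shader-based, runs anywhere
    ]
    for name, hardware_ok in preference:
        if hardware_ok and name in game_supports:
            return name
    return "native/TAA"

# Example: an older GPU without FP8 in a game that ships FSR 3 and XeSS.
print(pick_upscaler(GPUCaps("amd", False, False), {"FSR 3", "XeSS"}))  # -> XeSS
```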




www.youtube.com/@Pemalite

Just a few examples of what I am talking about with Nvidia being a powerhouse in vision models.

https://research.nvidia.com/publication/2025-06_mambavision-hybrid-mamba-transformer-vision-backbone

We propose a novel hybrid Mamba-Transformer backbone, MambaVision, specifically tailored for vision applications. Our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features. Through a comprehensive ablation study, we demonstrate the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results show that equipping the Mamba architecture with self-attention blocks in the final layers greatly improves its capacity to capture long-range spatial dependencies. Based on these findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria. For classification on the ImageNet-1K dataset, MambaVision variants achieve state-of-the-art (SOTA) performance in terms of both Top-1 accuracy and throughput. In downstream tasks such as object detection, instance segmentation, and semantic segmentation on MS COCO and ADE20K datasets, MambaVision outperforms comparably sized backbones while demonstrating favorable performance.
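
Very roughly, the structural idea reads like this. The sketch below is my own toy reading of the abstract, not the paper's code: a hierarchical backbone where the early stage uses a cheap sequential mixer (a placeholder standing in for the real Mamba state-space block) and only the final stage uses self-attention blocks.

```python
import torch
import torch.nn as nn

class TokenMixer(nn.Module):
    """Placeholder sequential mixer standing in for the paper's Mamba block
    (a real implementation would use a selective state-space scan)."""
    def __init__(self, dim):
        super().__init__()
        self.mix = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    def forward(self, x):                 # x: (batch, tokens, dim)
        return x + self.mix(x)

class AttnBlock(nn.Module):
    """Standard self-attention block, used only in the final stage, mirroring
    the abstract's 'self-attention blocks in the final layers'."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
    def forward(self, x):
        h = self.norm(x)
        return x + self.attn(h, h, h, need_weights=False)[0]

class TinyHybridBackbone(nn.Module):
    """Hierarchical toy backbone: the first stage runs the cheap mixer on many
    tokens, the last stage adds self-attention on a downsampled sequence."""
    def __init__(self, dim=64, depth_per_stage=2):
        super().__init__()
        self.stage1 = nn.Sequential(*[TokenMixer(dim) for _ in range(depth_per_stage)])
        self.pool = nn.AvgPool1d(2)       # halve the token count between stages
        self.stage2 = nn.Sequential(*[AttnBlock(dim) for _ in range(depth_per_stage)])
    def forward(self, x):                 # x: (batch, tokens, dim)
        x = self.stage1(x)
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)
        return self.stage2(x)

out = TinyHybridBackbone()(torch.randn(1, 196, 64))
print(out.shape)                          # torch.Size([1, 98, 64])
```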

https://research.nvidia.com/publication/2025-11_attention-sphere

We introduce a generalized attention mechanism for spherical domains, enabling Transformer architectures to natively process data defined on the two-dimensional sphere - a critical need in fields such as atmospheric physics, cosmology, and robotics, where preserving spherical symmetries and topology is essential for physical accuracy. By integrating numerical quadrature weights into the attention mechanism, we obtain a geometrically faithful spherical attention that is approximately rotationally equivariant, providing strong inductive biases and leading to better performance than Cartesian approaches. To further enhance both scalability and model performance, we propose neighborhood attention on the sphere, which confines interactions to geodesic neighborhoods. This approach reduces computational complexity and introduces the additional inductive bias for locality, while retaining the symmetry properties of our method. We provide optimized CUDA kernels and memory-efficient implementations to ensure practical applicability. The method is validated on three diverse tasks: simulating shallow water equations on the rotating sphere, spherical image segmentation, and spherical depth estimation. Across all tasks, our spherical Transformers consistently outperform their planar counterparts, highlighting the advantage of geometric priors for learning on spherical domains.
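
The core trick, as I read it, is to fold the quadrature weights into the attention normalization so the sum over tokens approximates an integral over the sphere rather than a plain average. A toy sketch of that reading (not the paper's code, and ignoring the neighborhood-attention part):

```python
import torch

def spherical_attention(q, k, v, quad_w):
    """Attention where each key/value contribution is scaled by its numerical
    quadrature weight before renormalizing, so the weighted sum over tokens
    approximates an integral over the sphere (one reading of the abstract)."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (..., Nq, Nk)
    kernel = scores.softmax(dim=-1) * quad_w               # reweight by w_j
    kernel = kernel / kernel.sum(dim=-1, keepdim=True)     # renormalize
    return kernel @ v

# Toy equiangular grid: quadrature weights proportional to sin(colatitude),
# which compensates for the denser sampling near the poles.
n_lat, n_lon, dim = 8, 16, 32
theta = torch.linspace(0.5, n_lat - 0.5, n_lat) * torch.pi / n_lat
quad_w = torch.sin(theta).repeat_interleave(n_lon)         # (n_lat * n_lon,)
quad_w = quad_w / quad_w.sum()

tokens = torch.randn(1, n_lat * n_lon, dim)
out = spherical_attention(tokens, tokens, tokens, quad_w)
print(out.shape)                                           # (1, 128, 32)
```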

https://research.nvidia.com/publication/2025-07_radiance-surfaces-optimizing-surface-representations-5d-radiance-field-loss

We present a fast and simple technique to convert images into a radiance surface-based scene representation. Building on existing radiance volume reconstruction algorithms, we introduce a subtle yet impactful modification of the loss function requiring changes to only a few lines of code: instead of integrating the radiance field along rays and supervising the resulting images, we project the training images into the scene to directly supervise the spatio-directional radiance field. The primary outcome of this change is the complete removal of alpha blending and ray marching from the image formation model, instead moving these steps into the loss computation. In addition to promoting convergence to surfaces, this formulation assigns explicit semantic meaning to 2D subsets of the radiance field, turning them into well-defined radiance surfaces. We finally extract a level set from this representation, which results in a high-quality radiance surface model. Our method retains much of the speed and quality of the baseline algorithm. For instance, a suitably modified variant of Instant NGP maintains comparable computational efficiency, while achieving an average PSNR that is only 0.1 dB lower. Most importantly, our method generates explicit surfaces in place of an exponential volume, doing so with a level of simplicity not seen in prior work.
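
Not the paper's exact formulation, but here's a toy contrast of the two loss shapes the abstract describes, under my reading: the baseline alpha-blends sample colors along a ray into one pixel and compares that to the training image, while the modified loss compares the field's color at each sample directly to the projected pixel color, moving the blending weights into the loss itself.

```python
import torch

def volume_rendering_loss(weights, colors, target_rgb):
    """Baseline: alpha-blend sample colors along the ray into one rendered
    pixel, then compare that pixel to the training image."""
    rendered = (weights.unsqueeze(-1) * colors).sum(dim=0)   # (3,)
    return ((rendered - target_rgb) ** 2).sum()

def radiance_surface_loss(weights, colors, target_rgb):
    """One reading of the abstract: supervise the field's color at every
    sample with the projected pixel color, with the blending weights moved
    into the loss instead of the image formation model."""
    per_sample = ((colors - target_rgb) ** 2).sum(dim=-1)    # (n_samples,)
    return (weights * per_sample).sum()

# Toy ray: a handful of samples with occupancy-style weights.
weights = torch.tensor([0.1, 0.7, 0.2])    # blending weights along the ray
colors = torch.rand(3, 3)                   # per-sample RGB from the field
target = torch.rand(3)                      # pixel color from the training image
print(volume_rendering_loss(weights, colors, target),
      radiance_surface_loss(weights, colors, target))
```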


https://research.nvidia.com/publication/2025-08_gaia-generative-animatable-interactive-avatars-expression-conditioned-gaussians

3D generative models of faces trained on in-the-wild image collections have improved greatly in recent times, offering better visual fidelity and view consistency. Making such generative models animatable is a hard yet rewarding task, with applications in virtual AI agents, character animation, and telepresence. However, it is not trivial to learn a well-behaved animation model with the generative setting, as the learned latent space aims to best capture the data distribution, often omitting details such as dynamic appearance and entangling animation with other factors that affect controllability. We present GAIA: Generative Animatable Interactive Avatars, which is able to generate high-fidelity 3D head avatars for both realistic animation and rendering. To achieve consistency during animation, we learn to generate Gaussians embedded in an underlying morphable model for human heads via a shared UV parameterization. For modeling realistic animation, we further design the generator to learn expression-conditioned details for both geometric deformation and dynamic appearance. Finally, facing an inevitable entanglement problem between facial identity and expression, we propose a novel two-branch architecture that encourages the generator to disentangle identity and expression. On existing benchmarks, GAIA achieves state-of-the-art performance in visual quality as well as realistic animation. The generated Gaussian-based avatar supports highly efficient animation and rendering, making it readily available for interactive animation and appearance editing.

https://research.nvidia.com/publication/2025-07_generative-detail-enhancement-physically-based-materials

We present a tool for enhancing the detail of physically based materials using an off-the-shelf diffusion model and inverse rendering. Our goal is to enhance the visual fidelity of materials with detail that is often tedious to author, by adding signs of wear, aging, weathering, etc. As these appearance details are often rooted in real-world processes, we leverage a generative image model trained on a large dataset of natural images with corresponding visuals in context. Starting with a given geometry, UV mapping, and basic appearance, we render multiple views of the object. We use these views, together with an appearance-defining text prompt, to condition a diffusion model. The details it generates are then backpropagated from the enhanced images to the material parameters via inverse differentiable rendering. For inverse rendering to be successful, the generated appearance has to be consistent across all the images. We propose two priors to address the multi-view consistency of the diffusion model. First, we ensure that the initial noise that seeds the diffusion process is itself consistent across views by integrating it from a view-independent UV space. Second, we enforce geometric consistency by biasing the attention mechanism via a projective constraint so that pixels attend strongly to their corresponding pixel locations in other views. Our approach does not require any training or finetuning of the diffusion model, is agnostic of the material model used, and the enhanced material properties, i.e., 2D PBR textures, can be further edited by artists.
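
The first prior (consistent seed noise across views) is the easiest part to picture. Here's a toy sketch of my reading of it, with made-up shapes: sample each view's diffusion seed noise from one shared UV-space noise texture via that view's UV coordinates, so pixels that see the same surface point get the same noise. The paper's actual integral formulation and the attention biasing are not reproduced here.

```python
import torch

def view_noise_from_uv(uv, uv_res=256, channels=4):
    """Sample per-view seed noise from one shared UV-space noise texture.
    uv: (n_views, H, W, 2) UV coordinates in [0, 1] for each rendered view.
    Pixels that map to the same UV texel get identical noise across views,
    which is the consistency property the abstract's first prior is after."""
    n_views, H, W, _ = uv.shape
    uv_noise = torch.randn(channels, uv_res, uv_res)   # one texture for all views
    # Nearest-neighbour lookup keeps each pixel's noise i.i.d. standard normal
    # (bilinear filtering would shrink its variance).
    idx = (uv.clamp(0, 1) * (uv_res - 1)).round().long()   # (n_views, H, W, 2)
    noise = uv_noise[:, idx[..., 1], idx[..., 0]]           # (C, n_views, H, W)
    return noise.permute(1, 0, 2, 3)                        # (n_views, C, H, W)

# Two views that share the same UV mapping produce identical seed noise.
uv = torch.rand(1, 64, 64, 2).repeat(2, 1, 1, 1)
noise = view_noise_from_uv(uv)
print(noise.shape, torch.allclose(noise[0], noise[1]))      # (2, 4, 64, 64) True
```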

https://research.nvidia.com/publication/2025-06_gaurast-enhancing-gpu-triangle-rasterizers-accelerate-3d-gaussian-splatting

3D intelligence leverages rich 3D features and stands as a promising frontier in AI, with 3D rendering fundamental to many downstream applications. 3D Gaussian Splatting (3DGS), an emerging high-quality 3D rendering method, requires significant computation, making real-time execution on existing GPU-equipped edge devices infeasible. Previous efforts to accelerate 3DGS rely on dedicated accelerators that require substantial integration overhead and hardware costs. This work proposes an acceleration strategy that leverages the similarities between the 3DGS pipeline and the highly optimized conventional graphics pipeline in modern GPUs. Instead of developing a dedicated accelerator, we enhance existing GPU rasterizer hardware to efficiently support 3DGS operations. Our results demonstrate a 23× increase in processing speed and a 24× reduction in energy consumption, with improvements yielding 6× faster end-to-end runtime for the original 3DGS algorithm and 4× for the latest efficiency-improved pipeline, achieving 24 FPS and 46 FPS respectively. These enhancements incur only a minimal area overhead of 0.2% relative to the entire SoC chip area, underscoring the practicality and efficiency of our approach for enabling 3DGS rendering on resource-constrained platforms.



For the most part, the latest DLSS transformer model, though FSR 4 still does some things better.

In my experience, both still have problems with some things, even when running as native-resolution AA, that standard TAA doesn't have.