mutantsushi said: The CG artist's perspective was relevant, although it seems he makes a flawed assumption that too many others also do:
It probably is a screen-space effect, as is the case with all DLSS models (outside Ray Reconstruction). I don't think the engine is feeding in game data on material properties and light sources that the model then corrects before the frame is rendered, for example. It's not a PINN (Physics-Informed Neural Network). But I also think there are limits to what 3D artists and tech-media personalities know about deep-learning models, and they treat this as a big gotcha.
Image-segmentation/semantic feature modeling is extremely good in 2026, even for tiny models, and Nvidia has already mentioned that this model uses semantics to help it obtain information about objects in the scene. That is a far more efficient way to do it than feeding the model heavy game data.
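To make the idea concrete, here is a toy sketch (not Nvidia's pipeline) of why a per-pixel semantic mask is a cheap proxy for full game data: instead of shipping material and light buffers, a restoration pass can just look up per-class behavior. The class names and per-class strengths below are invented for illustration.

```python
# Hypothetical semantic class ids (invented for this sketch).
SKY, FOLIAGE, SKIN = 0, 1, 2

# Hypothetical per-class sharpening strengths a restoration pass might use:
# foliage gets aggressive detail recovery, sky and skin stay soft.
SHARPEN_BY_CLASS = {SKY: 0.1, FOLIAGE: 0.9, SKIN: 0.3}

def sharpen_with_semantics(pixels, mask):
    """Scale each pixel's detail term by its semantic class's strength."""
    out = []
    for value, cls in zip(pixels, mask):
        base = 0.5            # stand-in for a low-frequency estimate
        detail = value - base # stand-in for the high-frequency residual
        out.append(base + SHARPEN_BY_CLASS[cls] * detail)
    return out

frame = [0.9, 0.8, 0.6, 0.55]          # 1-D "image" for brevity
mask  = [SKY, FOLIAGE, FOLIAGE, SKIN]  # one class id per pixel
print(sharpen_with_semantics(frame, mask))
```

The point is that one integer per pixel carries enough object-level context to change behavior per region, without the engine exporting any of its internal material or lighting state.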
Sure, there is a risk of hallucination going this route, but what they're missing is that training objectives matter. Image and video models, even "filters," aren't trained on the same objectives as any DLSS version, including DLSS 5. There is no reason to believe that two models pre-trained on different objectives and on different data distributions will have the same limitations, even if they share the same neural-network architecture.
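A minimal, self-contained illustration of that last point (invented data, nothing to do with any actual DLSS training setup): the *same* one-parameter "architecture" y = w * x, trained on the same data under two different objectives, lands on two different functions with different failure modes. The squared-error fit is dominated by a single outlier; the absolute-error fit mostly ignores it.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 40.0]   # y = 2x everywhere except one outlier

def fit(grad_fn, lr, steps):
    """Plain (sub)gradient descent on a single weight w."""
    w = 0.0
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def mse_grad(w):
    # d/dw mean((w*x - y)^2) = mean(2 * x * (w*x - y))
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

def mae_grad(w):
    # subgradient of mean(|w*x - y|) = mean(x * sign(w*x - y))
    def sign(r):
        return (r > 0) - (r < 0)
    return sum(x * sign(w * x - y) for x, y in zip(xs, ys)) / len(xs)

w_mse = fit(mse_grad, lr=0.01, steps=2000)
w_mae = fit(mae_grad, lr=0.01, steps=2000)
print(f"MSE fit: w = {w_mse:.2f}")   # pulled hard toward the outlier
print(f"MAE fit: w = {w_mae:.2f}")   # stays near the clean trend, w = 2
```

Identical parameter count, identical data, different loss, different learned function. That is the whole argument against assuming a generative video filter and a reconstruction model must hallucinate the same way.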
Last edited by sc94597 - 3 days ago