| HoloDust said:
Now, if developers can train it on Ground Truth of their game in the future (I can envision something like Path Traced version of the game with 1024 or so rays cast per pixel, which GPUs can't run in real time, and then that being fed into DLSS5 as training data) |
I think the long-term goal is to push more and more toward efficient on-line learners rather than pre-trained models. Both Nvidia and AMD have had "Neural Radiance Caching" for RTGI and path tracing in their SDKs for about a year now, and Nvidia's solution has been implemented in Portal RTX and Quake III, iirc.
The point of DLSS 5 seems to be using a pre-trained model to blend lighting and materials more cleanly, but it seems to me that, just as with path tracing, an on-line learner could do this better than a pre-trained model (and it would be very game- and art-style-specific, because it learns from the game's data as you play).
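To make the on-line-learner idea concrete, here's a toy sketch of what a radiance cache that trains *as you play* looks like: each frame, the renderer produces a handful of fresh path-traced samples, and a tiny regressor takes one SGD step on them. Everything here is illustrative, not Nvidia's actual NRC (which is a small MLP trained on the GPU inside the renderer); I'm standing in for the MLP with a linear model on random Fourier features, and `path_traced_radiance` is a made-up stand-in for the sparse ground-truth rays.

```python
# Toy sketch of an on-line radiance cache: regress radiance from a
# 5-D (position, direction) input, updated with one SGD step per frame
# from a few fresh "path-traced" samples. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth radiance field the renderer samples sparsely.
def path_traced_radiance(x):
    return np.sin(3.0 * x[..., 0]) * np.cos(2.0 * x[..., 1])

# A linear model on fixed random Fourier features stands in for the MLP.
W = rng.normal(size=(5, 64))   # fixed random feature projection
w = np.zeros(128)              # trainable cache weights

def features(x):
    z = x @ W
    return np.concatenate([np.sin(z), np.cos(z)], axis=-1)

def predict(x):
    return features(x) @ w

lr = 0.01
for frame in range(5000):                    # each frame: a few new samples
    x = rng.uniform(-1, 1, size=(32, 5))     # (position, direction) inputs
    y = path_traced_radiance(x)              # sparse ground-truth rays
    err = predict(x) - y
    w -= lr * features(x).T @ err / len(x)   # one on-line SGD step

# After "playing" for a while, the cache approximates the field it has
# seen -- it is inherently specific to this scene's data.
x_test = rng.uniform(-1, 1, size=(256, 5))
mse = np.mean((predict(x_test) - path_traced_radiance(x_test)) ** 2)
print(f"cache MSE: {mse:.4f}")
```

The point of the sketch is the training loop, not the model: there is no pre-training pass, so whatever the cache learns is by construction specific to this game's content and art style, which is exactly the property being argued for above.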
