
Forums - PC - I trained a Deep Learning Real Time Image Quality Cleaner

Hi all, I trained a deep learning model that cleans up image quality in games, running inference in real time. It works after the upscaler, rather than as a substitute for it. Here are some examples from Cyberpunk 2077 on an RTX 4090m. Make sure to right click -> view on a larger screen.

The model is a CNN-ViT hybrid: a ViT encoder and a CNN decoder. While the images show 30 fps in the top left, that was a locked framerate so that the training data stayed synchronized. It runs at 100 fps (versus 130 fps without it) on an RTX 4090m @ 1080p. In this example, the original is FSR 2 Performance @ 1080p, while the cleaned image is improved. It is a moderate improvement; I'd say it is like going up one level for a given upscale mode (Performance to Balanced, Balanced to Quality, etc.).

So far I have only trained on two games (Cyberpunk 2077 and Shadow of the Tomb Raider), but I plan to train on more. The improvement is mostly spatial; it doesn't get rid of temporal shimmering and similar artifacts, but I hope to solve those too!

I have an internal app that works like Lossless Scaling (it captures the input using WGC and outputs the cleaned image). It works only on the final output: there is no buffer data other than the color buffer. The encoder works as a stand-in for the various buffers. There are some temporal features, but again no motion vectors.
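For anyone curious what "ViT encoder + CNN decoder" means in terms of data flow, here is a minimal numpy sketch. All sizes, weights, and layer counts are made up for illustration (the post doesn't give any); the real model obviously has trained weights, more layers, and runs on GPU at 1080p. The idea is: patchify the color buffer into tokens, run self-attention over them (the ViT encoder), then map the token grid back to pixels with a convolutional head (the CNN decoder) at the input resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, purely for illustration:
H = W = 32      # toy frame; the real model runs on a 1080p color buffer
P = 8           # patch size -> a 4x4 grid of 16 tokens
D = 64          # token embedding dim

def patchify(img):
    # (H, W, 3) -> (num_tokens, P*P*3): non-overlapping patches as flat vectors
    h, w, c = img.shape
    img = img.reshape(h // P, P, w // P, P, c)
    return img.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * c)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(tokens, Wq, Wk, Wv):
    # Single-head self-attention: the core op of the ViT encoder
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def conv3x3(x, kernel):
    # Naive same-padding 3x3 conv: (H, W, Cin) with kernel (3, 3, Cin, Cout)
    h, w, _ = x.shape
    cout = kernel.shape[-1]
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w, cout))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.tensordot(xp[i:i+3, j:j+3], kernel, axes=3)
    return out

# --- forward pass ---
frame = rng.random((H, W, 3))                       # stands in for the captured color buffer
tokens = patchify(frame) @ rng.normal(0, 0.02, (P * P * 3, D))
tokens = attention(tokens, *(rng.normal(0, 0.02, (D, D)) for _ in range(3)))

grid = tokens.reshape(H // P, W // P, D)            # tokens back onto a spatial grid
up = grid.repeat(P, axis=0).repeat(P, axis=1)       # nearest-neighbor upsample to full res
cleaned = conv3x3(up, rng.normal(0, 0.02, (3, 3, D, 3)))  # CNN decoder head -> RGB

print(cleaned.shape)  # (32, 32, 3): same resolution in, same resolution out
```

Since the model only ever sees the final swapchain color (no depth, normals, or motion vectors), the encoder has to infer that scene structure itself, which is what makes the single-attention-over-patches design a reasonable fit here.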




SOTR (Shadow of the Tomb Raider)



Looks really good!



Edit: Switch to 4K in YouTube to combat compression.

Last edited by sc94597 - 4 hours ago