Yes.
The problem with the first test was that he used an AI trained on generic movies.
The key point is to train the network on the game's specific content.
The developers would feed the network tons of 4K, antialiased, etc. images along with the corresponding native 360p/720p/1080p images, train for a week or two, and ship the model with the game. The result would be far, far better. Training it to avoid jaggies while adding detail would be a piece of cake, especially since it only has to resolve the kinds of images it will actually encounter in that game.
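The idea above boils down to building (low-res input, native high-res target) training pairs. A minimal sketch of that pairing step, assuming a simple 2x box-downsample and grayscale frames stored as lists of lists (all names and the 2x factor are illustrative, not any real pipeline):

```python
# Hypothetical sketch: build (low-res, high-res) training pairs by
# box-downsampling captured native frames. The net would then learn
# to map the low-res input back to the high-res target.

def downsample_2x(frame):
    """Average each 2x2 block of a 2D grayscale frame (list of lists)."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[y][x] + frame[y][x + 1] +
             frame[y + 1][x] + frame[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

def make_training_pairs(high_res_frames):
    """Pair every native frame with its downsampled counterpart."""
    return [(downsample_2x(f), f) for f in high_res_frames]

# Toy 4x4 "frame" stands in for a captured native image.
frame = [[float(x + 4 * y) for x in range(4)] for y in range(4)]
pairs = make_training_pairs([frame])
low, high = pairs[0]
print(len(low), len(low[0]))    # 2 2  -- the low-res input
print(len(high), len(high[0]))  # 4 4  -- the high-res target
```

In practice the low-res side would come from actual low-resolution renders rather than downsampling, so the net also learns the renderer's aliasing artifacts, but the pairing idea is the same.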
I don't know how much processing power a forward pass through the network would take; at first I supposed it was a lot, so it could be incorporated into the dock (since it runs after rendering time). That would mean a higher-priced dock that upscales to 4K (as an optional accessory for high-end users).
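To get a feel for why the forward-pass cost worried me, here is a back-of-the-envelope estimate. Every number is an assumption for illustration: a plain 8-layer conv stack with 3x3 kernels and 32 channels, evaluated per output pixel at 4K, 30 frames per second.

```python
# Rough cost of one upscaling forward pass. All parameters are
# illustrative assumptions, not any real model's architecture.

def conv_net_macs(width, height, layers=8, channels=32, kernel=3):
    """Multiply-accumulates for a plain conv stack at one resolution."""
    macs_per_pixel_per_layer = kernel * kernel * channels * channels
    return width * height * layers * macs_per_pixel_per_layer

macs_per_frame = conv_net_macs(3840, 2160)       # one 4K output frame
tflops_needed = macs_per_frame * 2 * 30 / 1e12   # 2 ops per MAC, 30 fps
print(f"{tflops_needed:.1f} TFLOPS sustained")   # 36.7 TFLOPS sustained
```

Tens of TFLOPS under these (admittedly naive) assumptions, which is exactly why dedicated tensor-core hardware, in a dock or in the SoC, matters here.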
However, the video reminded me that NVIDIA's new mobile chips also embed the tensor cores in the SoC. That could be in the Switch's system itself, and far cheaper than including them in a dock specially for it...
Anyway, I had already considered that, even before the RTX boards with DLSS:
http://gamrconnect.vgchartz.com/thread.php?id=236112&page=1
