Would like to point out again that it doesn't take a data center's worth of compute to train sub-billion-parameter models, which DLSS models are, since they have to run in real time.
AlexNet was trained on a gaming PC with two GTX 580s in 2012 in about a week, and it is probably on par with Nvidia's DLSS CNN model in terms of parameter count (~60M). The DLSS transformer model is about twice that size. Nvidia did use a supercomputer for DLSS 1, but that was mostly to render the synthetic training data, which they probably need to extend for newer iterations, though they can still reuse the old data.
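For a rough sense of scale, here's a back-of-envelope sketch (all the numbers below are my own assumed figures, not from the comment above): estimate the total training FLOPs for an AlexNet-scale run and divide by a guessed sustained throughput of two GTX 580s.

    # Back-of-envelope estimate: can two GTX 580s train a ~60M-parameter
    # CNN in roughly a week? All figures below are rough assumptions.

    flops_per_image_fwd = 0.7e9   # assumed forward-pass cost for an AlexNet-scale CNN (~0.7 GFLOPs)
    fwd_bwd_multiplier = 3        # backward pass roughly 2x forward, so ~3x total
    images = 1.2e6                # ImageNet-1k training set size
    epochs = 90                   # roughly what the AlexNet paper reports

    total_flops = flops_per_image_fwd * fwd_bwd_multiplier * images * epochs

    gtx580_peak_fp32 = 1.58e12    # peak FP32 throughput of one GTX 580 (~1.58 TFLOPS)
    num_gpus = 2
    utilization = 0.2             # assumed sustained fraction of peak
    sustained = gtx580_peak_fp32 * num_gpus * utilization

    days = total_flops / sustained / 86_400
    print(f"total training compute: {total_flops:.2e} FLOPs")
    print(f"estimated wall-clock time: {days:.1f} days")

With those assumed numbers the estimate lands at a few days of wall-clock time, which is consistent with the "about a week" figure and a tiny fraction of what a modern data center delivers.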
Also, if deep learning died tomorrow, there are dozens of other use cases for these data centers. Nvidia GPUs aren't single-purpose accelerators; they're fully programmable and can run many different compute workloads. You could see them being used for CGI, scientific computing, big data processing, etc. Nvidia is probably the most hedged of the AI companies.







