Soundwave said:
sc94597 said:

Why "except Nintendo?" They're fully aligned with Nvidia. SW3 will almost certainly have a Nvidia chip and neural rendering features. It'll be interesting to see the Nintendo-primary users in this thread navigate this. I am sure there will be some arbitrary distinction of why it isn't gen-AI. 

Nintendo doesn't have much control over this stuff anyway, they will eventually just have to go whatever way the broader industry goes. 

My guess is they'll make a Switch 2 Pro at some point this generation, provided RAM/silicon prices normalize, to give the Switch 2 greater momentum as perhaps their last "traditional" CPU/GPU-driven hardware. Switch 3 (or whatever it is) will be a more radical leap toward a neural rendering pipeline (we're talking 2032 or 2033 or something), as that's likely where Nvidia is going: the rise of NPU (tensor core) driven silicon and less of the actual GPU.

Eventually I think the NPUs will be able to infer and create visuals from even lower-end inputs, but even the current Switch 2 can output PS5-range visuals as is; that's likely more than enough reference data already to give a bunch of generative-AI-tuned NPU cores a base to "upscale" from. Like I said, a "Switch 3" may just be a recycled, die-shrunk Switch 2 GPU that is dirt cheap by 2033, with the real upgrade being a bunch of NPU/tensor cores that handle the real-time "filter effects" to bring the visuals up, plus maybe an improvement on the CPU side.

They'll call this DLSS 5, but we know that's really just marketing; it's more like Neural Rendering 1 or something. Likely, just as with the original DLSS, the second or third iteration of the technique will see large gains too. The first version of DLSS kind of sucked, and then it took off with DLSS 2 and 3.

So the neat thing is that the tensor cores are technically already an NPU in all but name: they're just integrated on the GPU die rather than being a separate block. You're probably right that future chips will be tensor-core heavy.

UE5 already has its own neural shaders within the engine, and Nvidia's working on a fork of UE5 to bring in their neural shaders, with them likely merging down the line. 

I think the future is going to be online training of thousands of small neural shaders. Right now neural shaders are pre-trained small MLPs (very small, a few hundred to a few thousand parameters), so as the hardware becomes more capable they should probably just be trained online, with some sort of abstracted API available to the developer to set the objectives of each shader.
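To make "small MLP" concrete, here's a rough sketch of what one of these shaders amounts to. All the sizes and weights here are made up for illustration, but it shows how a shader in the few-hundred-parameter range is just a couple of tiny matrix multiplies per evaluation:

```python
import numpy as np

# Hypothetical sketch of a "neural shader": a tiny pre-trained MLP that maps
# per-pixel features (e.g. encoded position/normal/view direction) to RGB.
# Sizes are illustrative, not taken from any real implementation.

rng = np.random.default_rng(0)

IN_FEATURES = 8   # stand-in for the shader's encoded inputs
HIDDEN = 16
OUT_RGB = 3

# "Pre-trained" weights stand in for whatever offline training produced.
W1 = rng.standard_normal((IN_FEATURES, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, OUT_RGB)) * 0.1
b2 = np.zeros(OUT_RGB)

def neural_shade(features: np.ndarray) -> np.ndarray:
    """One shader evaluation: feature vector in, RGB in [0, 1] out."""
    h = np.maximum(features @ W1 + b1, 0.0)       # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid clamps to [0, 1]

n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)  # 8*16 + 16 + 16*3 + 3 = 195 parameters

rgb = neural_shade(rng.standard_normal(IN_FEATURES))
print(rgb.shape)  # (3,)
```

That whole network is 195 parameters, which is why thousands of these can plausibly run (and eventually train) per frame on tensor-core-heavy hardware.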

Then the final pass would just be a post-processing clean-up. Nvidia already has an online learner in the form of Neural Radiance Caching.
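The "online learner" idea is simple enough to sketch. In the spirit of Neural Radiance Caching, the model keeps training at runtime from reference samples the renderer produces each frame, instead of shipping with frozen weights. Everything here (a single linear layer standing in for the cache, the fake "ground truth" function, the learning rate) is illustrative:

```python
import numpy as np

# Hedged sketch of per-frame online learning, loosely in the spirit of
# Nvidia's Neural Radiance Caching: each frame, the renderer produces a
# small number of high-quality reference samples, and the cache takes one
# SGD step toward them. A single linear layer stands in for the network.

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 3)) * 0.1   # cache weights, adapted at runtime
b = np.zeros(3)
LR = 0.05

def predict(x):
    return x @ W + b

def online_step(x, target):
    """One SGD step on a freshly produced reference sample."""
    global W, b
    err = predict(x) - target           # prediction error vs. ground truth
    W -= LR * np.outer(x, err)          # gradient of 0.5 * ||err||^2
    b -= LR * err

# Simulate frames: "ground truth" comes from a fixed hidden function.
true_W = rng.standard_normal((8, 3))
losses = []
for frame in range(200):
    x = rng.standard_normal(8)
    target = x @ true_W
    online_step(x, target)
    losses.append(float(np.mean((predict(x) - target) ** 2)))

print(losses[-1] < losses[0])  # the cache adapts as frames go by
```

The appeal is that an online model adapts to the current scene and lighting instead of having to generalize to everything offline, which is exactly the trade NRC makes.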

DLSS 5 seems like a stop-gap solution to me until the developer tools and developer experience are mature enough for direct neural rendering. I can actually see game companies hiring Machine Learning Engineers onto their staff to help the 3D Rendering Engineers implement the artists' visions. The development and testing environments just need to mature.

Edit: And AMD has direct equivalents as well.