Soundwave said: That may only be a problem for a temporary period though. AMD, I'm sure, will make their own knock-off of this, just as developers support both DLSS upscaling and FSR in the same game, but Nvidia has something like a 90% marketshare of PC GPUs anyway. And on a closed system like a hypothetical Switch 3, with exclusive Nintendo games that aren't on other platforms, you may never, ever know any better. In fact I would say for a system like that it would be stupid to even try to focus on path tracing/ray tracing at all. Let the generative AI handle all the lighting; if PS4/5-era baked lighting provides enough reference data for it to create a look that imitates path tracing well enough to fool Digital Foundry, it's hard to really justify the performance cost of ray/path tracing. Now it does look to me like the algorithm Nvidia is using is trained to over-light and create a vivid picture that "pops" on purpose; it just looks at every scene and goes "Imma make this look flashy as fuck." But I hate to say it, most consumers are going to be happy with that. Most people, even graphics enthusiasts, don't really care if a scene looks "accurate" — they want it to look eye-pleasing, and I think that's all Nvidia is going for.
The issue is that AMD and Nvidia will ship two different models that produce different results, and third-party developers would have to tune for both to get their intended look. That requires more labor, not less.
My point about consistency was more about temporal consistency. Image-to-image and text-to-image models aren't very temporally consistent in real time, largely because they don't have enough input features to work from. DLSS 5 at the very least has the buffer data. There are real limits to how little input data the DLSS 5 method can get away with.
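To make the buffer-data point concrete, here is a minimal numpy sketch (my own illustration, not Nvidia's actual pipeline) of why a motion-vector buffer matters: reprojecting the previous frame along motion vectors hands the model a temporally aligned history to blend with, instead of forcing it to hallucinate one pixel-by-pixel each frame.

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame by per-pixel motion vectors (nearest-neighbor).

    prev_frame: (H, W) array of intensities
    motion_vectors: (H, W, 2) array of (dy, dx) offsets, current -> previous
    """
    h, w = prev_frame.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys + motion_vectors[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + motion_vectors[..., 1], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]

def temporal_blend(current, history, alpha=0.1):
    """Exponential blend: a small alpha keeps most of the warped history,
    which is what damps frame-to-frame flicker."""
    return alpha * current + (1 - alpha) * history

# A flat feature translating one pixel right: reprojection aligns the
# history with the current frame exactly, so the blend is flicker-free.
prev = np.zeros((4, 4)); prev[:, 1] = 1.0
curr = np.zeros((4, 4)); curr[:, 2] = 1.0
mv = np.zeros((4, 4, 2)); mv[..., 1] = -1  # look one pixel left in prev
warped = reproject(prev, mv)
assert np.allclose(warped, curr)
```

A pure image-to-image model gets none of this alignment for free, which is one reason it shimmers in motion where a buffer-fed upscaler doesn't.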
Nvidia seems to be moving in the direction of smaller, more numerous, modular pretrained shaders and materials, plus online learners that accelerate path tracing by filling in gaps. If you follow their white-paper history, that has been the consistent trend. AMD is following the same direction, and it is being built directly into APIs like DirectX 12 and Vulkan. This keeps neural rendering from being vendor-locked while still letting each vendor accelerate it with their own hardware implementations, and it means developers don't have to write vendor-specific code paths.
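The "small modular neural shaders" idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual format: a neural material is just a tiny fused MLP — a few small matmuls plus activations — sized so the matrix units that the new DirectX 12/Vulkan extensions expose can evaluate it inline during shading. The input features and layer sizes below are made up for the example.

```python
import numpy as np

def neural_material(features, weights):
    """Evaluate a tiny fused MLP standing in for a material shader:
    small matmuls + ReLU on hidden layers, linear output layer."""
    x = features
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

rng = np.random.default_rng(0)
# Hypothetical layout: 4 input features (e.g. n.l, n.v, roughness, metalness)
# -> 16 hidden -> 16 hidden -> 3 RGB outputs. Real encodings and layer
# sizes are vendor- and asset-specific; this just shows the shape of it.
sizes = [(4, 16), (16, 16), (16, 3)]
weights = [(rng.normal(0.0, 0.1, s), np.zeros(s[1])) for s in sizes]

rgb = neural_material(np.array([0.7, 0.5, 0.3, 0.0]), weights)
assert rgb.shape == (3,)
# Batched evaluation (one row per shaded point) works the same way.
batch = neural_material(np.ones((8, 4)), weights)
assert batch.shape == (8, 3)
```

The point of networks this small is that the weights live close to the shader core and the whole evaluation is a handful of matrix ops, which is why it can be standardized at the API level and accelerated differently by each vendor underneath.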
DLSS 5 seems to be more of a complementary technology and a stopgap until actual neural rendering pipelines fully mature.