If the measure of an "AI filter" is that it works on 2D image data plus buffer data, then every DLSS component except Ray Reconstruction is an "AI filter."
Personally, I think having access to buffer data is a very important difference between this and those "AI filters."
I also think the final release will probably end up including G-buffer data in its training set and at inference time to address some of the criticisms we've been seeing, if it doesn't already (that is still ambiguous).
I think Nvidia's idea was that neural shaders would handle pre-processing of materials and lighting, and DLSS5 would apply a final touch-up, but they really should just merge the technologies. If DLSS Ray Reconstruction can carry the DLSS brand despite not being purely post-process, then so can neural shaders.
Or maybe it is time for DLSS to be retired as a brand, going back to just describing their super-sampler?