drkohler said:
According to Digital Foundry, NVidia used two 5090s for the demo, one for the graphics, one for the AI. That instantly means if you have only one 5090 gpu, a massive downgrade in resolution is needed. If you have less than a 5090 gpu (which likely 99% of the people have), then the whole thing becomes more or less unusable. Also the thing is tied to frame generation...
This is obviously not going to be the case in the product release. With work-in-progress models like this, reductions in both memory consumption and parameter count tend to happen very rapidly as the model goes through iterative rounds of knowledge distillation.
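To make the distillation point concrete, here is a toy sketch of the core idea, a small "student" model trained to match a large "teacher" model's softened output distribution. This is plain illustrative Python, not NVIDIA's actual pipeline; all names and numbers here are made up for the example:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions.
    # A real training loop would minimize this with gradient descent;
    # here we only evaluate it.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose outputs track the teacher scores a lower loss than one
# that doesn't. Repeating this with ever-smaller students is the
# iterative shrinking described above.
teacher = [4.0, 1.0, 0.2]
good_student = [3.9, 1.1, 0.3]   # close to the teacher's logits
bad_student = [0.1, 3.0, 2.0]    # far from the teacher's logits
print(distillation_loss(teacher, good_student)
      < distillation_loss(teacher, bad_student))  # True
```

The upshot: the student never needs the teacher's parameter count to reproduce most of its behavior, which is why model footprints shrink so quickly between a tech demo and a shipping product.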
We also have to remember that the tensor cores are the main processing units for DLSS/AI workloads, and they make up only a small share of an RTX 5090's cores. It's quite possible that the current iteration simply needed extra tensor throughput, and they chose RTX 5090s for the demo to avoid any issues on stage.