LegitHyperbole said:
Mind-blowingly so. Game creatives and visionaries could test their ideas without needing to wait for stuff to be built, and they'd then be able to communicate their vision more accurately. The amount of time and money this would save is immense, and it would make the whole dev cycle shorter and less risky.
Those diffusion models are the opposite of what you're suggesting.
The RL agent plays DOOM, and the neural model learns to match the controls with the images. The neural model uses a diffusion approach, like the Stable Diffusion image generator, and it learned from roughly 900 million frames that the RL agent generated in its playthroughs.
First you've got to build it, then play it, and only then can the diffusion model make a worse-running copy...
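For anyone wondering what "match the controls with the images" means mechanically, here's a minimal sketch of that kind of action-conditioned diffusion training step. It's not GameNGen's actual code; the library (PyTorch), the tiny conv net, the action-space size, the context length, and the frame shapes are all stand-ins for illustration:

```python
# Minimal sketch (assumptions throughout, not the paper's code): the agent's
# recorded (past frames, action, next frame) tuples train a diffusion model
# to denoise the next frame conditioned on the action and recent frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ACTIONS = 8        # assumed size of the game's action space
CONTEXT_FRAMES = 4     # assumed number of past frames used as conditioning
T_STEPS = 1000         # diffusion timesteps

class NextFrameDenoiser(nn.Module):
    """Tiny stand-in for the real U-Net: predicts the noise added to the next
    frame, given the noisy next frame, past frames, action, and timestep."""
    def __init__(self):
        super().__init__()
        self.action_emb = nn.Embedding(NUM_ACTIONS, 8)
        self.time_emb = nn.Embedding(T_STEPS, 8)
        in_ch = 3 * (CONTEXT_FRAMES + 1) + 16   # frames + broadcast conditioning
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),      # predicted noise on next frame
        )

    def forward(self, noisy_next, past_frames, action, t):
        b, _, h, w = noisy_next.shape
        cond = torch.cat([self.action_emb(action), self.time_emb(t)], dim=-1)
        cond = cond[:, :, None, None].expand(b, 16, h, w)
        x = torch.cat([noisy_next, past_frames.flatten(1, 2), cond], dim=1)
        return self.net(x)

# Simple linear noise schedule
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, opt, past_frames, action, next_frame):
    """Noise the real next frame, then train the model to recover the noise
    given the action and the recent frames."""
    b = next_frame.shape[0]
    t = torch.randint(0, T_STEPS, (b,))
    noise = torch.randn_like(next_frame)
    a_bar = alphas_cumprod[t][:, None, None, None]
    noisy_next = a_bar.sqrt() * next_frame + (1 - a_bar).sqrt() * noise
    pred_noise = model(noisy_next, past_frames, action, t)
    loss = F.mse_loss(pred_noise, noise)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = NextFrameDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Random tensors stand in for the agent's recorded playthrough data.
    past = torch.rand(2, CONTEXT_FRAMES, 3, 64, 64)
    action = torch.randint(0, NUM_ACTIONS, (2,))
    nxt = torch.rand(2, 3, 64, 64)
    print(training_step(model, opt, past, action, nxt))
```

Generating a frame at play time then means running that denoiser over many reverse-diffusion steps, which is where the "worse-running copy" part comes from.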