One of the fairer coverages of it.
Well, I was playing Requiem and thought, "It's ok, but I would enjoy this game more if Grace looked more like an ad for an AI girlfriend."
Now you idiots know how I feel when you play a 16-bit game with raw pixels and no CRT on a crappy emulator. :D :) :P
Why doesn't everyone just go with cel shading instead? It frees up the GPU, and the art style will never age.


The reveal video on Youtube now sits at 17k likes to 89k dislikes.
I'm fully against using AI to create better-looking graphics. There are many AI-made games on PSN and they all look so goddamn boring and generic.
Give me 90s and 00s graphics anytime.
| sc94597 said: One of the fairer coverages of it. |
Discourse died, at least in the US, back in 2016 and yes, it died on both sides. I feel surrounded by mindless morons who parrot the newest talking points and I rarely see independent thought.
*not directed at anyone in particular, just a general comment*
Edit
Also, internet outrage needs to settle down. DLSS is optional. Don't like it, don't use it. Hell, buy AMD; it's not that hard.
This shit isn't even upscaling anymore. It's just having AI trace over your video game and "improve it", redrawing entire facial features and animations from scratch. And all with that tacky, bland AI look.
I for one don't find this impressive but rather intrusive. DLSS wasn't criticised as much in the past because it wasn't getting in the way. If this is the future of DLSS, I'll stop at 4 at most, if I end up using it at all. I don't find the newer models more appealing in any way: not the environments, not the people. AC: Shadows should be a little murky, imo. My fear is overworked devs using this in lazy ways and making games worse, just as stronger hardware led to optimisation going out the window in the past.
Just a guy who doesn't want to be bored. Also

One thing Nvidia probably should do is explain how they trained the model, what all of the inputs are, etc.
There are still people who think this is basically stable diffusion or nano banana applied to a fully processed output and therefore they think it inherits all of the ethical issues that those models have.
The facts, as implied by the press release, are:
1. Like DLSS 2-4.5, this model has access to velocity buffers, depth buffers, color buffers, and luminance as inputs. Scraped images and videos don't provide all of this, so synthetic data makes up the bulk, if not all, of the datasets used to train it.
2. To run in real time the model needs to be small: sub-billion parameters. So training isn't akin to the six-month to year-long runs on tens of thousands of GPUs that VLMs need, and the "this is why you can't buy RAM" argument doesn't apply here.
3. The same goes for concerns about CSAM and deepfakes: this model is specific to game data.
4. The objective/target variable of video models doesn't apply here. Video models are trained to take an image or text input and generate a video, which means the target distribution is much broader (a few orders of magnitude broader) than changing game materials and lighting. It is also why temporal consistency is such an issue for video models, and why they can't generate in real time well even if you have the compute for it.
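To make point 1 concrete, here's a minimal sketch of the kind of per-frame engine buffers such a model would consume, none of which exist in scraped web images or videos. All shapes, names, and channel choices here are illustrative assumptions, not Nvidia's actual DLSS interface:

```python
import numpy as np

# Hypothetical per-pixel buffers a game engine exposes each frame.
# Frame size is kept tiny purely for illustration.
H, W = 4, 4
rng = np.random.default_rng(0)
color = rng.random((H, W, 3), dtype=np.float32)     # RGB color buffer
depth = rng.random((H, W, 1), dtype=np.float32)     # depth buffer
velocity = rng.random((H, W, 2), dtype=np.float32)  # motion vectors (x, y)

# Luminance derived from color with Rec. 709 coefficients.
luma = color @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

# The model input would be a channel-wise stack of all of these;
# a scraped photo only ever gives you the 3 color channels.
model_input = np.concatenate([color, depth, velocity, luma[..., None]], axis=-1)
print(model_input.shape)  # (4, 4, 7)
```

The point of the sketch: four of the seven input channels (depth, both velocity components, luminance tied to the renderer's own color output) come from the engine itself, which is why synthetic, engine-generated data would dominate the training set.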
The makeup and "yasified" look comes from the fact that many games and 3D character models sexualize their characters.
At worst there might be some transfer learning from a video model, but the risk there is that it shifts the distribution so far that it no longer works for games, so I don't think that is likely.
Last edited by sc94597 - 3 days ago