
Wow. The example picture alone already shows what’s wrong.
DLSS off: the background is rainy, the “cigare[ttes]” thing and the delicatessens sign are weathered, there’s some blue plastic in the background, she’s wearing brown, her eyes and lips lack any shine. This scene is clearly representing a tired, weary, “soulless” reality; one you survive but don’t live in, the kind that makes you whisper to yourself “…I’m so bloody tired”…
DLSS on: throws the mood out of the window by adding OH-SO-SHINY!!! everywhere.
This is not a breakthrough. This is not fidelity. It’s butchering artistic intent.
Nvidia is not a gaming company, they are a money company. Abandon them at all costs; it very well may cost everything.
Good! Fuck this noise.
Good. I don’t mind using AI to upscale the image in video games, as long as it’s just improving the quality while retaining the original design. This changes the design way too much, and ends up making all of these characters look the same as each other. For some reason it makes everything look like a Coca-Cola Christmas commercial and I hate it.
That’s because those commercials have been AI slop for the last few years too.
Radeon, if you want to have a moment…
This was inevitable. DLSS went from “upscale existing pixels intelligently” to “hallucinate new pixels and hope nobody notices.” Of course people noticed.
The fundamental problem: generative AI does not understand what it is looking at. It sees patterns and fills them in. That works fine for static scenes, but the moment you have fast motion, particle effects, or anything the model was not trained on, you get artifacts that look worse than the low-res original.
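To make the “existing pixels” vs “hallucinated pixels” distinction concrete, here’s a toy sketch (nothing to do with DLSS’s actual internals): classic interpolation-based upscaling can only ever blend between source values, so the output stays bounded by what was actually rendered, while a generative model is free to emit values the source never contained.

```python
def bilinear_upscale_1d(row, factor):
    """Upscale a 1-D 'image' row by linearly interpolating between
    neighboring source pixels. Every output value is a weighted average
    of two real inputs, so nothing is invented."""
    out = []
    for i in range(len(row) - 1):
        for k in range(factor):
            t = k / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])  # keep the final source pixel
    return out

src = [10, 20, 15]
up = bilinear_upscale_1d(src, 4)

# Interpolated pixels always stay within the source's value range --
# a generative model has no such guarantee.
assert min(src) <= min(up) and max(up) <= max(src)
```

The point of the toy: interpolation has a built-in worst case (blurry but faithful), whereas a model that fills in unseen detail from training data can fail in ways that contradict the scene, which is exactly the artifact class people are complaining about.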
Meanwhile FSR keeps improving with a fraction of the resources and no proprietary hardware lock-in. FSR 4 on RDNA 4 is genuinely competitive now, and it works on any GPU.
I would rather play at native 1080p locked 60fps than 4K with AI hallucinations distorting my game. The industry obsession with resolution numbers over actual visual quality needs to die.
They were just starting to win people over with the simple AI upscaling, and then they pull bullshit like this. It isn’t even going to be what the released product looks like.
AI already hit the indie game market like a ton of shit bricks.
It’s lame, but I can’t help wondering if in this particular case gamers are just peeved that the character looks older.
Is there more detail on the process beyond what’s in the blog post? I could see a scenario in which the training data was just generated by running multiple playthroughs on a $500,000 GPU at impossible quality, creating a copy of what that would look like on a mid-range GPU, and then training a model. I’m not sure I would object to that.
