- cross-posted to:
- pcgaming@lemmy.ca
They don’t care. Nobody thinks this is a meaningful announcement; it’s just another pretext to justify AI. Nvidia, and much of the US economy with it, is so deep into sunk-cost investment in the “future AI ecosystem” that they need any headline to keep the machine churning. That’s what this is. It doesn’t mean it isn’t real, or that gamers won’t have to suffer through it, but no part of this is actually meant to make anyone’s gaming experience better in any way.
Someone run it over Jensen’s keynote. See how he likes it.
Who gives a fuck? I can play your game “as intended” or my way. Mods make games awesome!
If someone wants to play with AI slop mode fully enabled, that’s their choice and their prerogative and I’m assuming they bought the game so how about they just enjoy it however the hell they want to.
From brain rot to slop. The word of the year, everybody.
Personally, I only care that it looks like a deepfake. And that alone is giving me the ick. Like I would pay more to have it NOT look like this.
I saw my tag and have to say, you don’t make for a very convincing Nvidia shill bot. You should try making a new account, again.
As long as you pay more, I’m sure they’re happy either way!
Finally, the slopware generation
Being young today must be fun. Just missed out on all the good stuff in the ’80s, ’90s, and 2000s to experience this mess we have now.
I was hanging out with my neighbour a few months back. When I listed off how fucked kids today are…
Then I remembered he has two young boys and they’re growing up with this dystopia as their present and future.
I apologised once I realized his family is directly affected.
He said “Oh I already know they’re fucked.”
I don’t see how this will stay consistent enough for art directors to sign off on it. It’s effectively just a hallucination based on your current video game frame.
Unfortunately, the latest stuff I’ve seen is all about keeping character consistency, which is basically having a fixed frame of reference for every generation. What I don’t get, not knowing much about the details, is how LLM generation is faster than actual 3D modeling with more details? Perhaps overall it is faster per frame to generate a 2D image vs. tracking all the polys.
Not saying which is right to do, there’s lots of baggage with discussing AI stuff, just wondering about the actual tech itself.
What I don’t get, not knowing much about the details, is how LLM generation is faster than actual 3D modeling with more details?
It’s not, DLSS5 takes a frame as rendered normally by your GPU and feeds it into a second $3k GPU to run the AI image transformer.
There is no performance benefit, in fact it adds a bit of latency to the process.
And Nvidia claims it will ship this year without the need for a second 50-series card.
Lots of bullshit being laid out by Nvidia here.
Even then it will be a performance loss either way. It’s exactly the opposite of what DLSS used to be for.
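The claim that a post-process image transformer costs performance rather than gaining it can be sketched with back-of-envelope arithmetic: the filter runs after the normal render, so its cost stacks on top of every frame. All numbers below are illustrative assumptions, not Nvidia's figures.

```python
# Rough latency model: an AI image filter that runs *after* the normal
# render adds to every frame's cost instead of replacing any of it.
# All millisecond values are made-up assumptions for illustration.

def frame_time_ms(render_ms, transfer_ms=0.0, inference_ms=0.0):
    """Total time to deliver one frame through the pipeline."""
    return render_ms + transfer_ms + inference_ms

def fps(total_ms):
    """Frames per second implied by a per-frame time in milliseconds."""
    return 1000.0 / total_ms

base = frame_time_ms(render_ms=8.0)                 # plain render: 8 ms
filtered = frame_time_ms(render_ms=8.0,
                         transfer_ms=1.5,           # copy frame to second GPU
                         inference_ms=6.0)          # run the image transformer

print(f"plain render:   {fps(base):.0f} fps")       # 125 fps
print(f"with AI filter: {fps(filtered):.0f} fps")   # ~65 fps, plus added latency
```

The exact numbers don't matter; the point is that any nonzero transfer and inference time can only push frame time up, never down, which is the opposite of what upscaling-era DLSS did.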
Polys aren’t where the expensive computation is. The bottleneck is raytracing, volumetric fog, etc. All those things that make a game look more real and natural.
I think this DLSS stuff could potentially substitute for raytracing and other light/shadow/reflection/transparency effects that are very expensive to both program correctly and calculate every frame.
My two cents
Lighting is the space image gen struggles in most right now. Individual areas will show convincing shadows, atmosphere, etc., but motivation and consistency are lacking. The shots from Hogwarts Legacy show that really clearly. Slice out a random 10% × 10% chunk of the frame and the lighting looks more realistic, but the overall frame loses the directional lighting driven by real things in the scene.
I’m curious how well it handles lighting from unseen light sources that otherwise didn’t contribute as much to the scene as they should have. In other words, off screen lights that shine into the scene but are not fully rendered by traditional means. Same thing goes for reflections.
I expect a lot of nonsense being hallucinated in those areas.
It’s not an ‘LLM’ (large language model). 🤦
I try to avoid the overhyped and wrongly used term AI, so what’s the proper term? Related to diffusion models? Something different?
Generative AI is a decent catch-all that I think would apply to this.
Another good option is “machine learning” or ML, but that’s fallen out of favor because it doesn’t sound as impressive as AI. But really it’s teaching a machine to do a specific task. It’s not intelligent; it’s just that we don’t understand how it learns.
Neural network would be the most technically accurate given what they’ve announced so far.
There’s no information on whether it’s a diffusion or transformer architecture. Though given DLSS 4.5 introduced a transformer for lighting, my guess would be that it’s the same thing, just more widely applied. But the technical details haven’t been released in anything I’ve seen, so for the time being it’s being described as “neural rendering” using an unspecified neural network.
I saw a mention somewhere of it doing only one pass. Stable Diffusion takes 30–100+ passes, so this sounds like fast inpainting rather than actual generation.
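The arithmetic behind that guess is simple: a diffusion sampler calls the network once per denoising step, so at the same per-pass cost a one-pass filter is 30–100× cheaper per frame. A minimal sketch (the 5 ms per-pass cost is a made-up assumption, not a benchmark):

```python
# Diffusion sampling runs the network once per denoising step, so total
# generation cost scales linearly with the step count. The per-pass cost
# here is an illustrative assumption.

PASS_MS = 5.0  # hypothetical cost of one network forward pass, in ms

def generation_cost_ms(num_passes):
    """Total cost of generating one image with num_passes network calls."""
    return num_passes * PASS_MS

single_pass  = generation_cost_ms(1)    #   5 ms -> plausible per-frame
diffusion_30 = generation_cost_ms(30)   # 150 ms -> ~6.7 fps ceiling
diffusion_100 = generation_cost_ms(100) # 500 ms -> a slideshow

for label, ms in [("1 pass", single_pass),
                  ("30 steps", diffusion_30),
                  ("100 steps", diffusion_100)]:
    print(f"{label:>9}: {ms:6.1f} ms/frame ({1000.0 / ms:.1f} fps ceiling)")
```

Whatever the real per-pass cost is, only a single-digit number of passes per frame is compatible with real-time framerates, which is why a one-pass image-to-image transform is the plausible reading.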

Nvidia DLSS (Deep Learning Super Sloppificator)
Double Latency Slow Sloppification.
Personally, I don’t know much about this technology. That said, I’ve heard that the original purpose of DLSS was to improve gaming performance, give you more FPS, and so on.
In that sense, many, myself included, are wondering: how is this slop generator going to improve game performance? How is giving Grace from RE9 a totally different face with makeup on going to improve my gaming experience?
It makes “I fixed this ugly FEMALE character” chuds happy and that’s what matters to them.
All of this upscaling, when it was introduced over a decade ago, was meant to give older cards a longer lease on life. Now it’s morphed into the mandatory way to get a stable framerate, since developers can just rely on DLSS (and, to a lesser extent, FSR) to reach an acceptable framerate instead of optimizing.
As for how this will improve the gaming experience? I honestly don’t see it at this point. Back when that was the original goal, sure; now, with this “ChatGPT moment for graphics”, I see it as beneficial only to corporate parasites and “shareholder value” as we wave goodbye to artistic vision and everything goes toward looking like AI OnlyFans.
Saves payroll on artists which means more profits. #winning
The artists are still necessary for the slop generator to have something to slop all over.
I think the idea is that you could use low resolution / detail models that take up less RAM and are faster to process and DLSS hallucinates a high res image.
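The memory side of that idea can be quantified: uncompressed texture memory scales with width × height, so shipping quarter-resolution assets and letting the generator hallucinate the detail back would cut VRAM roughly 16×. A sketch with illustrative sizes (not figures from any real game):

```python
# Uncompressed texture memory is width * height * bytes-per-pixel, so
# halving resolution in each dimension cuts memory 4x. The sizes below
# are illustrative assumptions, not real asset budgets.

def texture_bytes(width, height, bytes_per_pixel=4):
    """Memory footprint of one uncompressed RGBA8 texture."""
    return width * height * bytes_per_pixel

full_res = texture_bytes(4096, 4096)   # a single 4K texture
quarter  = texture_bytes(1024, 1024)   # same texture at quarter resolution

print(f"4096x4096: {full_res / 2**20:.0f} MiB")   # 64 MiB
print(f"1024x1024: {quarter / 2**20:.0f} MiB")    # 4 MiB
print(f"savings:   {full_res / quarter:.0f}x")    # 16x
```

Whether the generator can reconstruct detail consistently enough to make that trade worthwhile is exactly the consistency question raised elsewhere in this thread.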
Not a fan of reimagined stuff of any sort; it usually doesn’t hit well. But from a tech standpoint I can think of ways one could use the tech to improve game performance for new games. Usually making a game run faster or feel more realistic is all about fooling the player: not drawing what isn’t visible, showing hints of things that aren’t really there. Hell, that’s been true for movies and even stage, right?
So my thought on how this could work is to have the actual core models be lower polys, enough for details but not as high as the best we’ve seen done, and minimal texturing. Then the generator uses that as a base to form the image it puts over the top. Still don’t see how that can be done that fast, but apparently we’re there now.
The problem I see is consistency. Whatever the AI generates for a given source won’t be consistent throughout the game. Even in the original Digital Foundry video, you can see how Grace’s face looks like a totally different person depending on the distance from which it’s viewed.
The artistic style is supposed to solve that consistency issue, but this AI is ruining it.
(Also, in the same video, you can see they were using two 5090s to run the DLSS 5 games, so…)
I find it interesting that the AI parts of the video have very little video in them. They have the original game moving along, and then they show the AI version and mostly keep it as a still. I suspect that they did this so that you can’t do a side-by-side comparison and see that the AI version doesn’t actually play as well as the original version.
Also, I’ve got to wonder about how it must feel to be an artist who worked on one of these games, and watch the thing you carefully hand-tuned to match the artistic vision of the game design be replaced by the mindless addition of wrinkles.
I think Jensen said during a presentation a long time ago that a long term goal was to have graphics being entirely generated. This seems like a big step towards that.
Nvidia realized they hit a wall on rendering… So they went full on into something that you can’t properly benchmark - especially against competitors AND prior generations.
Nvidia went full snake oil… And really would like people not to notice.
Never cared for realism in games to begin with, so don’t particularly care to comment on how it looks, but I’ve been thinking that I genuinely find it creepy.
Not just Uncanny Valley material, but also having these faces stare at you, always fully lit, it just gives me the creeps, kind of like a panopticon situation.
I don’t fucking know if that’s my own trauma playing into that, where for the longest time, people looking at me generally meant they were about to bully me. But either way, I’m about to head to bed and genuinely feel like there’s a 20% chance I’ll have a mild nightmare from that shit.
This whole AI craze has been a wild ride of all kinds of nightmare fuel, from depictions with missing/additional limbs to the weirdest warping of objects and limbs in those fucking generated videos. And the worst part is that some folks seem to just not see it, or not want to see it, so they keep using the nightmare generators.
I guess by this description that perhaps it would be good for something like Resident Evil, which strives to give you that uncanny creepiness, no?
Don’t think the playable characters are supposed to be creepy, but yeah, a yassified zombie would probably fit right in. 😅
Everything tech bros touch, dies
Aren’t video games art? Why does Nvidia think it knows better than the artists? And maybe I’m the only one, but DLSS has never made any of my games look better, ever. It’s always looked worse, and I now permanently turn that shit off and regret paying a 20 percent higher price for something I don’t use. I don’t usually agree with IGN articles, but this one hits it on the head.
Easy to avoid Nvidia products these days anyway. Overpriced, marginal performance gains over AMD, and tied to AI slop globally. It’s never been so easy to vote with your wallet.
That would only work if they still gave a shit about gamers or the consumer market in general. They’ve become a full AI datacenter company; that’s where they make the money. I don’t think they’ll even produce consumer graphics cards in 5 years.