I highly recommend searching for a video on what DLSS is and how it’s used, if you’ve never seen it before.
DLSS 4.5 and earlier was a “transformer model” that used the pixels already on your screen to predict what the surrounding pixels would be, letting you view games at a higher resolution than your settings. Some people have shown it running a game at 512 pixels and having it appear nearly identical to 1920, but with improved framerate. It struggled with motion in a lot of cases, but what you were seeing was what the game developers intended for you to see.
DLSS 5 is not doing that, clearly. It’s pulling from something else for reference, some other kind of neural network or language model. The most common critique I hear about it is that it’s “overriding the art” by replacing everything with slop the likes of which you see from other bullshit AI.
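For anyone who hasn’t seen upscaling before, the core idea described above (render few pixels, predict the ones in between from their neighbors) can be sketched in a few lines. To be clear, this is plain bilinear interpolation as a toy stand-in, not NVIDIA’s actual trained network (real DLSS uses a neural model plus motion vectors), and `bilinear_upscale` is a made-up name for illustration:

```python
def bilinear_upscale(img, factor):
    """Toy stand-in for DLSS-style upscaling: fill in each new pixel
    by blending the four rendered pixels that surround it.
    `img` is a 2D list of floats; returns a list `factor`x larger per axis."""
    h, w = len(img), len(img[0])
    nh, nw = h * factor, w * factor
    out = [[0.0] * nw for _ in range(nh)]
    for i in range(nh):
        # Map the output row back into source coordinates
        y = i * (h - 1) / (nh - 1) if nh > 1 else 0.0
        y0 = int(y); y1 = min(y0 + 1, h - 1); wy = y - y0
        for j in range(nw):
            x = j * (w - 1) / (nw - 1) if nw > 1 else 0.0
            x0 = int(x); x1 = min(x0 + 1, w - 1); wx = x - x0
            # Weighted average of the four surrounding rendered pixels
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            out[i][j] = top * (1 - wy) + bot * wy
    return out

low_res = [[0.0, 1.0],
           [1.0, 0.0]]
high_res = bilinear_upscale(low_res, 2)  # 2x2 -> 4x4
```

The interpolated pixels here are a pure function of the rendered frame, which is roughly why older DLSS could claim to stay faithful to what the developers rendered; a model that pulls detail from training data instead is doing something different in kind.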
I hate videos for information like that. I’d read an article though.
But from your description, DLSS <5 was genAI - transformer models are the backbone of genAI. There’s certainly the possibility that DLSS 5 is a whole other bucket of crabs but idk.
It’s a very visual topic so using a visual medium to learn about it is ideal.
Again, I feel like it’s disingenuous to compare using nearby pixels to predict local pixels, nearly as accurate as simply rendering at a higher resolution, with generating an entirely different image every frame. One of them sounds no different than using certain filters or post-processing; the other sounds like slop ass AI.
The problem stems from the term ‘GenAI’. These systems use math to predict things. There are a lot of valid mathematical calculations to predict out there. Rendering lighting is one of them.
Human language and imagery aren’t among them, which is what idiots have been trying to funnel through these models.
Weren’t the previous versions also genAI?
Generative AI is a name for some ways you can use AI, not for its architecture.
There’s space to discuss whether DLSS < 5 counts as it or not. But your argument is baseless.
The base for it is that it is generating pixels - and entire frames.
The difference between DLSS 5 and <5 seems quantitative, not qualitative.
The effect looks like a filter or shader. I’ve seen the comparisons.
The fuck are you talking about? DLSS 5 has been adding wrinkles and entire facial features, in one demo it kept accidentally adding wheels to cars driving in the background. It doesn’t look like a filter or shader, it looks like ass slop.
I looked at the still comparisons in the NVIDIA article.