• FiniteBanjo@feddit.online
      9 hours ago

      I highly recommend searching for a video on what DLSS is and how it's used, if you've never seen it before.

      DLSS 4.5 and before was a “transformer model” that used the pixels already on your screen to predict what the surrounding pixels would be, allowing you to view games at a higher resolution than your settings. Some people have shown it running a game at 512 pixels and it appears nearly identical to 1920, but with improved framerate. It struggled with motion in a lot of cases, but what you were seeing was what the game developers intended for you to see.
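      The pixel-prediction idea can be sketched in a few lines. This is a toy 2x upscaler where a fixed averaging kernel stands in for the learned weights; it is an illustration of the concept only, not NVIDIA's actual algorithm:

```python
import numpy as np

def upscale_2x(img):
    """Toy 2x upscaler: every new pixel is predicted from its known
    low-res neighbours. A fixed averaging kernel stands in here for the
    trained weights a model like DLSS would use (illustration only)."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    p = np.pad(img, 1, mode="edge")  # repeat edge pixels at the border
    for y in range(h):
        for x in range(w):
            c = p[y + 1, x + 1]  # the known low-res pixel
            right = p[y + 1, x + 2]
            down = p[y + 2, x + 1]
            diag = p[y + 2, x + 2]
            out[2 * y, 2 * x] = c                              # copied through
            out[2 * y, 2 * x + 1] = (c + right) / 2            # predicted
            out[2 * y + 1, 2 * x] = (c + down) / 2             # predicted
            out[2 * y + 1, 2 * x + 1] = (c + right + down + diag) / 4
    return out
```

      The real thing replaces the fixed averages with a trained network (and feeds in motion vectors from previous frames), but the shape of the problem is the same: fill in pixels the game never rendered.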

      DLSS 5 is not doing that, clearly. It’s pulling from something else for reference, some other kind of neural network or language model. The most common critique I hear about it is that it’s “overriding the art” by replacing everything with slop the likes of which you see from other bullshit AI.

      • FishFace@piefed.social
        8 hours ago

        I hate videos for information like that. I’d read an article though.

        But from your description, DLSS <5 was genAI - transformer models are the backbone of genAI. There’s certainly the possibility that DLSS 5 is a whole other bucket of crabs but idk.

        • marcos@lemmy.world
          6 hours ago

          Generative AI is a name for some ways you can use AI, not for its architecture.

          There’s space to discuss whether DLSS < 5 counts as that or not. But your argument is baseless.

          • FishFace@piefed.social
            5 hours ago

            The base for it is that it is generating pixels - and entire frames.

            The difference between DLSS 5 and <5 seems quantitative, not qualitative.

        • FiniteBanjo@feddit.online
          8 hours ago

          It’s a very visual topic so using a visual medium to learn about it is ideal.

          Again, I feel like it’s disingenuous to compare using pixels to predict nearby pixels, which comes out about as accurate as simply using a higher resolution, with generating an entirely different image every frame. One of them sounds no different from using certain filters or post-processing; the other sounds like slop-ass AI.

          • ZombiFrancis@sh.itjust.works
            8 hours ago

            The problem stems from the term ‘GenAI’. These systems use math to predict things, and there are a lot of valid mathematical calculations to predict out there. Rendering lighting is one of them.
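            Lighting is a good example of a well-posed prediction: classic Lambertian diffuse shading is just a clamped dot product. This is textbook graphics math, nothing DLSS-specific:

```python
def lambert_diffuse(normal, light_dir, intensity=1.0):
    """Lambertian diffuse term: brightness is proportional to the
    cosine of the angle between the surface normal and the light
    direction. Both vectors are assumed to be normalized 3-tuples."""
    cos_theta = sum(n * l for n, l in zip(normal, light_dir))
    # Clamp at zero: surfaces facing away from the light get none.
    return intensity * max(cos_theta, 0.0)
```

            A surface facing the light head-on gets full brightness; one edge-on to it gets none. That calculation has a single correct answer, which is the distinction being drawn here.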

            Human language and imagery aren’t among them, which is what idiots have been trying to funnel through these models.

            • FiniteBanjo@feddit.online
              7 hours ago

              The fuck are you talking about? DLSS 5 has been adding wrinkles and entire facial features; in one demo it kept accidentally adding wheels to cars driving in the background. It doesn’t look like a filter or shader, it looks like ass slop.