[Image: promo screenshot from Nvidia's official blog post]

I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they’re being run through an ultra-realistic beauty filter.

The photo above is used as the promo image for the official blog post, by the way. It completely ignores artistic intent and makes Grace’s face look “sexier”, because apparently that’s what realism looks like now.

I wouldn’t be so baffled if this were some experimental setting they were testing, but they’re advertising this as the next-gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

  • jacksilver@lemmy.world · 20 hours ago

    They do build a representation of words and sequences of words and use that representation to predict what should come next.

A simplistic illustration is the classic embedding diagram showing how, in certain vector spaces, you can relate man/woman/king/queen/royal to one another.

    The thing is, these are static representations, bound only to the information provided to the model. Meaning there is nothing enforcing real-world representations; only statistically consistent representations will be learned.
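The king/queen analogy above can be sketched with toy vectors. To be clear, the numbers and dimension labels below are made up purely for illustration; real embeddings (word2vec, GloVe, etc.) learn hundreds of dimensions from co-occurrence statistics, with no human-readable axes:

```python
import math

# Toy 3-D "embeddings" -- dims [royalty, masculinity, femininity] are
# invented for this demo; real learned dimensions are not interpretable.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.0, 0.1, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Vector arithmetic: king - man + woman
target = [k - m + w for k, m, w in zip(embeddings["king"],
                                       embeddings["man"],
                                       embeddings["woman"])]

# The nearest remaining word to the result is "queen" -- a purely
# geometric fact about these vectors, not "understanding".
best = max(
    (w for w in embeddings if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, embeddings[w]),
)
print(best)  # -> queen
```

The point of the demo is exactly the one made above: the analogy falls out of statistical regularities in the vectors, and nothing in the math ties those regularities to the real world.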

    • LurkingLuddite@piefed.social · 7 hours ago (edited)

      They don’t “learn” anything, though. They’re “trained” (still a bad term, but at least it’s the one the industry uses) to spit out the correct answer.

      People, especially CEOs and advertising firms, need to stop anthropomorphizing them. They do not learn. They do not “know”. They have statistically derived associations, and that’s it. That’s all.

      Holy hell, the ELIZA effect is in full swing, and it’s beyond sad. They don’t build the associations themselves. They don’t know what the representations mean. They absolutely do not know why two words are strongly associated. It’s just a bunch of math that computes a path through that precomputed vector space. That’s it.

      • jacksilver@lemmy.world · 6 hours ago

        I didn’t use the word “learn”, although that’s really just a matter of semantics. I said they build a representation of words/sequences in a vector space to capture the interplay of words.

        You can downvote me all you want, but that’s literally just the math that’s happening behind the scenes. Whether any of that approaches something called “learning”? Probably not, but I’m not a neuroscientist.