• Pennomi@lemmy.world · 74 points · 21 hours ago

    I don’t see how this will stay consistent enough for art directors to sign off on it. It’s effectively just a hallucination based on your current video game frame.

    • Rhaedas@fedia.io · 22 points · 21 hours ago

      Unfortunately, the latest stuff I’ve seen is all about keeping character consistency, which basically means having a fixed frame of reference for every generation. What I don’t get, not knowing much about the details, is how LLM generation is faster than actual 3D modeling with more detail. Perhaps overall it is faster per frame to generate a 2D image than to track all the polys.

      I’m not saying which is right to do; there’s lots of baggage in discussing AI stuff. I’m just wondering about the actual tech itself.

      • knightly the Sneptaur@pawb.social · 31 points · edited · 19 hours ago

        What I don’t get, not knowing much about the details, is how LLM generation is faster than actual 3D modeling with more detail.

        It’s not. DLSS 5 takes a frame rendered normally by your GPU and feeds it into a second $3k GPU to run the AI image transformer.

        There is no performance benefit; in fact, it adds a bit of latency to the process.
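
        The post-process pipeline described above can be sketched as a toy timing model. Everything here is illustrative: the function names and millisecond figures are made up for the sketch, not Nvidia’s actual pipeline.

```python
import time

def render_frame():
    """Stand-in for the GPU's normal render pass (simulated ~4 ms)."""
    time.sleep(0.004)
    return "raw_frame"

def neural_enhance(frame):
    """Stand-in for an AI image-transform pass run on the finished frame
    (simulated ~2 ms of extra inference time)."""
    time.sleep(0.002)
    return f"enhanced:{frame}"

def frame_time(enhance=False):
    """Return (frame, elapsed seconds) for one frame of the pipeline."""
    start = time.perf_counter()
    frame = render_frame()
    if enhance:
        # The transform runs only after rendering finishes, so its cost
        # is strictly additive in this model: latency, not speedup.
        frame = neural_enhance(frame)
    return frame, time.perf_counter() - start
```

        Because the transform consumes an already-rendered frame, its cost can only add to the per-frame total in this model, which matches the latency point above.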

        • paraphrand@lemmy.world · 13 points · 17 hours ago

          And Nvidia claims it will ship this year without needing a second 50-series card.

          Lots of bullshit being laid out by Nvidia here.

      • Eggymatrix@sh.itjust.works · 6 points · 20 hours ago

        Polys aren’t where the expensive computation is. The bottleneck is ray tracing, volumetric fog, and so on: all the things that make a game look more real and natural.

        I think this DLSS stuff could potentially substitute for ray tracing and the other light/shadow/reflection/transparency effects that are very expensive both to program correctly and to calculate every frame.

        My two cents

        • egregiousRac@piefed.social · 8 points · edited · 19 hours ago

          Lighting is the area image generation struggles with most right now. Individual regions will show convincing shadows, atmosphere, etc., but motivation and consistency are lacking. The shots from Hogwarts Legacy show that really clearly: slice out a random 10% × 10% chunk of the frame and the lighting looks more realistic, but the overall frame loses the directional lighting driven by real things in the scene.

          • paraphrand@lemmy.world · 4 points · edited · 17 hours ago

            I’m curious how well it handles lighting from unseen sources that otherwise didn’t contribute as much to the scene as they should have. In other words, off-screen lights that shine into the scene but are not fully rendered by traditional means. The same goes for reflections.

            I expect a lot of nonsense being hallucinated in those areas.

        • Rhaedas@fedia.io · 5 points · 20 hours ago

          I try to avoid the overhyped and wrongly used term AI, so what’s the proper term? Related to diffusion models? Something different?

          • hobovision@mander.xyz · 2 points · 11 hours ago

            Generative AI is a decent catch-all that I think would apply to this.

            Another good option is “machine learning” or ML, but that’s fallen out of favor because it doesn’t sound as impressive as AI. Really, though, it’s teaching a machine to do a specific task. It’s not intelligent; it’s just that we don’t understand how it learns.

          • kromem@lemmy.world · 10 points · 19 hours ago

            Neural network would be the most technically accurate given what they’ve announced so far.

            There’s no information on whether it’s a diffusion or a transformer architecture, though given that DLSS 4.5 introduced a transformer for lighting, my guess would be that it’s the same thing being applied more widely. The technical details haven’t been released as far as I’ve seen, so for now it’s being described as “neural rendering” using an unspecified neural network.

            https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-gen-6x-2nd-gen-transformer-super-res/

            • REDACTED@infosec.pub · 1 point · 3 hours ago

              I saw a mention somewhere of it doing only one pass. Stable Diffusion takes 30–100+ passes, so this sounds like fast inpainting rather than actual generation.
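
              The many-pass vs. one-pass distinction can be sketched with a toy example: a diffusion sampler calls its denoiser many times per image, while a single-pass transform evaluates once per frame. All functions and numbers below are made up for illustration and say nothing about Nvidia’s actual implementation.

```python
def denoise_step(x, strength=0.1):
    """One toy denoising update nudging x toward a 'clean' value of 0."""
    return x * (1.0 - strength)

def diffusion_sample(x, steps=50):
    """Diffusion-style generation: many small denoising passes per image,
    so per-frame cost is dozens of network calls."""
    for _ in range(steps):
        x = denoise_step(x)
    return x

def single_pass(x):
    """Single-pass transform: one evaluation per frame, so per-frame
    cost is a single network call."""
    return x * 0.5  # arbitrary one-shot mapping
```

              With a fixed per-call cost, the diffusion route is roughly `steps` times more expensive per frame, which is why a one-pass transform is plausible at real-time frame rates while multi-step sampling is not.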