“Well, first of all, they’re completely wrong,” Huang said in response to a question from Tom’s Hardware editor-in-chief Paul Alcorn about the criticism.

“The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI,” Huang continued.

Just an elongated way to say AI slop.

  • nightlily@leminal.space · 50 points · 23 hours ago

    So this dumb fuck’s own marketing material has said this operates off final pixel colour and motion vectors (for temporal stability presumably) - that says to me that it’s not working with actual geometry info at all. It probably has a step to infer geometry but it’s still just a fancy Instagram filter working with limited data and an obviously ill-suited training set.

    • lb_o@lemmy.world · 3 points · 19 hours ago
      Oh, thanks for pointing that out.

      Ignoring that the current version looks sloppy: as a gamedev I would accept an extra AI beautification post-processing step as an additional feature, but I would never accept a corporation getting its hands into my beloved geometry.

    • lime!@feddit.nu · 6 points · 22 hours ago

      the previous versions at least need the software to supply motion vectors. otherwise it’s just guesswork. i’m assuming there will be some way to supply lighting information as well.

      whatever the final product can do, they certainly didn’t show it off in their examples.
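[The motion-vector requirement mentioned above is just projection math: the engine reports, per pixel, where that pixel's surface point sat on screen last frame. A minimal numpy sketch, illustrative only and not NVIDIA's actual API, with a toy projection matrix:]

```python
import numpy as np

# Toy perspective matrix: clip w = -view z, so NDC = view_xy / -view_z.
PROJ = np.array([
    [1.0, 0.0,  0.0, 0.0],
    [0.0, 1.0,  0.0, 0.0],
    [0.0, 0.0,  1.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],
])

def view_matrix(cam_x: float) -> np.ndarray:
    """Camera translated along +x; the world shifts -x in view space."""
    v = np.eye(4)
    v[0, 3] = -cam_x
    return v

def ndc(view: np.ndarray, world_pos) -> np.ndarray:
    """Project a world-space point to normalized device coordinates."""
    p = PROJ @ view @ np.append(world_pos, 1.0)
    return p[:2] / p[3]  # perspective divide

def motion_vector(curr_view, prev_view, world_pos):
    """Screen-space displacement of the same world point between frames."""
    return ndc(curr_view, world_pos) - ndc(prev_view, world_pos)

# Static point 2 units in front of a camera that moved 0.1 units right:
mv = motion_vector(view_matrix(0.1), view_matrix(0.0), [0.0, 0.0, -2.0])
print(mv)  # prints [-0.05  0.  ] -- the point appears to slide left
```

Without this per-pixel history, a temporal upscaler really is reduced to guesswork about which direction things moved.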

      • orgrinrt@lemmy.world · 6 points · edited · 21 hours ago

        Technically, at least on Vulkan, these things can be inferred or intercepted with just an injected layer, though it’s not trivial. If you store a buffer history for depth, you can fairly accurately compute an approximation of the actual (isolated) mesh surfaces from the point of view of the camera. But that isn’t the same as the real polygons and meshes that the textures and everything map to, and I’m pretty sure you can’t run that pipeline in real time even with tiled temporal supersampling.

        It almost definitely works on the output directly, perhaps with some buffers like motion vectors and depth for the same frame, which they’ve needed since DLSS 2 anyway. But it’s pretty suspect to claim full polygons unless it runs with tight integration from the game itself, and even then the frame budgets are crazy tight as it is, never mind running extra passes on that level.
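[The depth-buffer trick described here can be sketched concretely: with the projection matrix and a stored depth value, a screen pixel unprojects back to a view-space surface point. This is illustrative numpy, not an actual Vulkan layer; it only recovers the surfaces the camera can see.]

```python
import numpy as np

def perspective(fov_y: float, aspect: float, near: float, far: float):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def unproject(proj, ndc_xy, ndc_depth):
    """NDC coordinates + depth -> view-space position, via inverse projection."""
    clip = np.array([ndc_xy[0], ndc_xy[1], ndc_depth, 1.0])
    v = np.linalg.inv(proj) @ clip
    return v[:3] / v[3]

proj = perspective(np.pi / 2, 1.0, 0.1, 100.0)

# Round-trip a known view-space point through the projection and back,
# the way a layer would for every pixel in the depth buffer:
p_view = np.array([0.3, -0.2, -5.0])
clip = proj @ np.append(p_view, 1.0)
ndc = clip[:3] / clip[3]
recovered = unproject(proj, ndc[:2], ndc[2])
```

A buffer of such points approximates visible surfaces only; anything occluded or off-screen never made it into the depth buffer, which is the gap between this and real polygons.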

        • SkunkWorkz@lemmy.world · 4 points · edited · 20 hours ago

          Probably not meshes, since that would be way too expensive. But these guys write the GPU drivers, so of course they have access to the various frame buffers, texture buffers, and light source data. Just from depth and normal map data you can get a good representation of the geometry; deferred rendering, for example, lights the scene with the data in the G-buffer, which is 2D, not geometry.
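[The "good representation of geometry from depth" claim can be demonstrated: given a buffer of view-space positions reconstructed from depth, crossing the screen-space derivatives yields per-pixel surface normals. A hypothetical sketch in numpy, not driver code; the synthetic buffer is a tilted plane:]

```python
import numpy as np

def normals_from_positions(pos: np.ndarray) -> np.ndarray:
    """pos: (H, W, 3) view-space positions. Cross the horizontal and
    vertical neighbour differences to get normals for interior pixels."""
    ddx = pos[1:-1, 2:] - pos[1:-1, :-2]   # change across the screen
    ddy = pos[2:, 1:-1] - pos[:-2, 1:-1]   # change down the screen
    n = np.cross(ddx, ddy)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Synthetic position buffer: the plane z = -5 + 0.5 * x, tilted about y.
xs, ys = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
pos = np.stack([xs, ys, -5.0 + 0.5 * xs], axis=-1)

n = normals_from_positions(pos)
# Every interior pixel recovers the plane's normal, proportional to (-0.5, 0, 1).
```

This is exactly a 2.5-D proxy: enough surface orientation to relight what is visible, but still not the meshes themselves.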

    • kromem@lemmy.world · 2 points · 18 hours ago

      That’s what he’s saying. That it doesn’t change the geometry or textures (still completely controlled by the devs) and that the parts that it does change are also tunable by the devs.

      He’s responding to the backlash about how it changes models/textures (which it doesn’t) by saying those are still fully in the hands of the devs, and that the parts people are seeing in the demos can be fine-tuned by the dev teams to match their vision for what they want it to do or not do (for example, changing lighting on material surfaces and hair but not on character faces).

      • nightlily@leminal.space · 2 points · 14 hours ago

        It’s a post-processing screen space effect. At that point, there’s zero control the game can have over the geometry. If the AI model wants to change it, it can. It fundamentally can’t operate only on lighting like the marketing claims; it can only make a hallucinated, best-effort statistical guess at what the geometry in the final image should be.
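[The screen-space limitation argued here has a concrete mechanism: the frame's buffers keep only the nearest surface per pixel. A small illustrative sketch, not real rasterizer code: two surfaces are depth-tested into a tiny buffer, and the occluded one leaves no trace for any post-process to recover.]

```python
import numpy as np

H, W = 4, 4
depth = np.full((H, W), np.inf)          # cleared depth buffer
surface_id = np.zeros((H, W), dtype=int)  # which surface "won" each pixel

def rasterize(sid: int, z: float, rows, cols):
    """Depth-test a constant-depth quad covering rows x cols."""
    for r in rows:
        for c in cols:
            if z < depth[r, c]:     # standard less-than depth test
                depth[r, c] = z
                surface_id[r, c] = sid

rasterize(1, z=10.0, rows=range(4), cols=range(4))  # far wall, fills screen
rasterize(2, z=2.0,  rows=range(4), cols=range(4))  # near wall, occludes it

# Every pixel now belongs to surface 2; surface 1 is gone from all buffers.
print(np.unique(surface_id))  # prints [2]
```

Anything a post-process "reconstructs" about surface 1 at this point is, by construction, inferred rather than known.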