“Well, first of all, they’re completely wrong,” Huang said in response to a question from Tom’s Hardware editor-in-chief Paul Alcorn about the criticism.

“The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI,” Huang continued.

Just an elongated way to say AI slop.

  • orgrinrt@lemmy.world · 3 days ago

    Technically, at least on Vulkan, these things can be inferred or intercepted with just an injected layer, though it’s not trivial. If you store a buffer history for depth, you can fairly accurately compute an approximation of the actual (isolated) mesh surfaces from the point of view of the camera. But that isn’t the same as the real polygons and meshes that the textures and everything else map onto, and I’m pretty sure you can’t run that pipeline in real time even with tiled temporal supersampling. It almost certainly works on the output directly, perhaps with some same-frame buffers like motion vectors and depth, which they’ve needed since DLSS 2 anyway. But it’s pretty suspect to claim full polygons unless it’s running with tight integration from the game itself, and even then the frame budgets are crazy tight as it is, never mind running extra passes at that level.
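The depth-buffer reconstruction the comment describes can be sketched roughly like this: a minimal NumPy illustration of unprojecting a captured depth buffer back into view-space surface positions, assuming a Vulkan-style [0, 1] depth range and a standard perspective projection. The function name and array shapes are my own for illustration, not part of any driver or layer API.

```python
import numpy as np

def unproject_depth(depth, inv_proj):
    """Reconstruct view-space positions from a depth buffer.

    depth    : (H, W) array of normalized device depths in [0, 1]
               (Vulkan convention).
    inv_proj : 4x4 inverse of the projection matrix.
    Returns  : (H, W, 3) array of view-space XYZ positions.
    """
    h, w = depth.shape
    # Pixel centers mapped to NDC coordinates in [-1, 1].
    xs = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    ys = (np.arange(h) + 0.5) / h * 2.0 - 1.0
    ndc_x, ndc_y = np.meshgrid(xs, ys)
    # Homogeneous clip-space points, one per pixel.
    clip = np.stack([ndc_x, ndc_y, depth, np.ones_like(depth)], axis=-1)
    view = clip @ inv_proj.T                # apply inverse projection
    view = view[..., :3] / view[..., 3:4]   # perspective divide
    return view
```

As the commenter notes, a position map like this only approximates the visible surfaces from the camera’s point of view; occluded geometry and actual mesh topology are not recoverable from depth alone.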

    • SkunkWorkz@lemmy.world · 3 days ago

      Probably not meshes, since that would be way too expensive. But these guys write the GPU drivers, so they of course have access to the various frame buffers, texture buffers, and light-source data. Just from depth and normal-map data you can get a good representation of the geometry. It’s like deferred rendering, which lights the scene using the data in the G-buffer, which is 2D, not geometry.
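The kind of geometric representation the comment has in mind can be sketched like this: a hypothetical NumPy example that estimates per-pixel surface normals from a view-space position map (the positions themselves could come from unprojecting the G-buffer’s depth attachment). This is an illustration of the idea, not anything a real driver exposes.

```python
import numpy as np

def normals_from_positions(pos):
    """Estimate per-pixel surface normals from a view-space position map.

    pos : (H, W, 3) array of view-space positions.
    Returns (H, W, 3) array of unit normals.
    """
    # Screen-space tangent vectors via finite differences.
    dx = np.gradient(pos, axis=1)   # change along image x
    dy = np.gradient(pos, axis=0)   # change along image y
    # The surface normal is perpendicular to both tangents.
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n
```

For a flat wall facing the camera this recovers a constant normal; curved surfaces produce smoothly varying normals, which is exactly the 2D-but-geometry-aware data deferred shading works from.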