• Rhaedas@fedia.io · 2 points · 19 hours ago

    Not a fan of reimagined stuff of any sort; it usually doesn’t land well. But from a tech standpoint, I can think of ways this could be used to improve performance in new games. Making a game run faster or feel more realistic is usually about fooling the player: not drawing what isn’t visible, showing hints of things that aren’t really there. Hell, that’s been true for movies and even the stage, right?

    So my thought on how this could work: keep the actual core models lower-poly (enough for detail, but not as high as the best we’ve seen) with minimal texturing, and have the generator use that as a base to form the image it puts over the top. I still don’t see how that can be done that fast, but apparently we’re there now.
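
    Roughly what I’m picturing, as a made-up Python sketch (every name here is invented for illustration; none of this is a real DLSS or engine API):

    ```python
    import numpy as np

    class NeuralOverlay:
        """Stand-in for the generative model. A real one would run
        inference on the G-buffer; this just passes the albedo through."""
        def enhance(self, gbuffer):
            return gbuffer["albedo"]

    def rasterize_gbuffer(scene, width=320, height=180):
        """Cheap traditional pass: rasterize the low-poly, lightly
        textured models into per-pixel buffers the model conditions on."""
        return {
            "albedo":  np.zeros((height, width, 3), dtype=np.float32),
            "normals": np.zeros((height, width, 3), dtype=np.float32),
            "depth":   np.zeros((height, width), dtype=np.float32),
        }

    def render_frame(scene, model):
        gbuffer = rasterize_gbuffer(scene)   # fast: low-poly geometry
        return model.enhance(gbuffer)        # slow part: generative detail pass

    frame = render_frame(scene=None, model=NeuralOverlay())
    ```

    The point of the split is that the traditional pass stays cheap and deterministic, and all the expensive “detail” gets hallucinated per pixel on top of it.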

    • NONE@lemmy.world · 7 points · 19 hours ago

      The problem I see is consistency. Whatever the AI generates for a given source won’t be consistent throughout the game. Even in the original Digital Foundry video, you can see how Grace’s face looks like a totally different person depending on the distance from which it’s viewed.

      A consistent artistic style is what’s supposed to hold all of that together, but this AI undermines it.

      (Also, in the same video, you can see they were using two 5090s to run the DLSS 5 games, so…)
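
      Back on the consistency point: the obvious mitigation would be to tie whatever the generator invents to the asset itself rather than to the frame. A toy sketch of that idea (hypothetical names, not how DLSS actually does it):

      ```python
      import hashlib
      import random

      def asset_seed(asset_id: str) -> int:
          # Same asset id -> same seed, every frame, at every draw distance.
          return int.from_bytes(hashlib.sha256(asset_id.encode()).digest()[:8], "big")

      def generate_appearance(asset_id: str) -> list[float]:
          # Stand-in for the generative pass: seeding the RNG from the asset
          # means whatever face it invents for "grace_face" is the same face
          # no matter when, or at what distance, it gets generated.
          rng = random.Random(asset_seed(asset_id))
          return [rng.random() for _ in range(4)]

      # With a per-frame seed instead, each camera move would sample a fresh
      # latent and Grace would get a new face every time.
      assert generate_appearance("grace_face") == generate_appearance("grace_face")
      ```

      Even with a stable seed, though, a model that sees the asset at different resolutions can still drift, which is exactly what the Digital Foundry footage shows.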