• cmnybo@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    42
    ·
    7 days ago

    That’s a big reduction in VRAM usage, but how much processing power does it take to decompress the textures? It’s worthless if it tanks the frame rate. The lower end GPUs that would benefit the most from this aren’t going to have a lot of processing power to spare.

    • 9point6@lemmy.world
      link
      fedilink
      English
      arrow-up
      33
      ·
      edit-2
      7 days ago

      You’ve gotta look at the magnitude of difference here, to use a toy example:

      Let’s use a memory data rate of 1GB/s for simple maths

      8GB of uncompressed textures getting moved would take us about 8 seconds

      The compressed stuff at <1GB is going to be done in less than 1 second.

      As long as the decompression process doesn’t take another 7 seconds to complete, it’s going to be more performant.
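
      The arithmetic can be sketched in a few lines of Python (the 1 GB/s rate and the 8 GB / 1 GB sizes are the toy numbers from this comment, not real hardware figures):

```python
# Toy numbers from the comment above: 1 GB/s memory data rate,
# 8 GB uncompressed vs ~1 GB compressed textures.
RATE_GBPS = 1.0          # assumed toy memory data rate, GB/s
uncompressed_gb = 8.0
compressed_gb = 1.0

transfer_uncompressed = uncompressed_gb / RATE_GBPS   # 8 seconds
transfer_compressed = compressed_gb / RATE_GBPS       # 1 second

# Break-even budget: decompression may take up to this long
# and the compressed path still wins overall.
budget = transfer_uncompressed - transfer_compressed
print(f"decompression budget: {budget:.1f} s")        # prints 7.0 s
```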

      Edit: typo

        • 9point6@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          6 days ago

          to use a toy example […] for simple maths

          I know, I just picked easy numbers for the sake of discussion. The actual data rate is not important to this particular discussion.

      • 3rdXthecharm@lemmy.ml
        link
        fedilink
        English
        arrow-up
        2
        ·
        7 days ago

        Thanks for the explanation!

        Would you happen to have information about any loss from compression or is that kind of thing negligible with that much time for it to unpack?

        That would be my only (uninformed) concern. I already fear we’re going too deep into an era of ‘fake’ things: fake frames, fake 4K, fake lighting through strobing to induce less blur for moving objects (that monitor test was sick, but I also fear eye exhaustion will become a thing). My sibling has a card capable of the new frame gen, and that doesn’t look as bad, but to me it’s still not visually equal to the same frame rate rendered raw in terms of clarity.

        • 9point6@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          6 days ago

          No idea on the loss side of things tbh, though given it’s AI based, I’m assuming it can’t be truly lossless

  • rogsson@piefed.social
    link
    fedilink
    English
    arrow-up
    24
    ·
    7 days ago

    I really hope the competition steps up their game soon… this monopoly is cancer and nvidia gets better at tech and shittier as a company by the second. At this point only DLSS 5 seems to scratch on their invincibility cloak. Jensen needs to be brought down to earth.

    • Buffalox@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      ·
      7 days ago

      Problem is that even when the competition is ahead, people continue to buy Nvidia.
      Texture compression is nothing new, and back in the day ATI was ahead of Nvidia on this.

      • fallaciousBasis@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        7 days ago

        Not really. They went back and forth, though. I’d argue Nvidia has generally had more efficient and aggressive memory compression (not just textures!)

        • Buffalox@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          7 days ago

          You’re obviously not aware of the sales statistics. AMD may win a tiny bit of market share when they’re in the lead against Nvidia, but even when AMD clearly had the better cards and the better value, Nvidia maintained the bigger market share.

  • MolochHorridus@piefed.social
    link
    fedilink
    English
    arrow-up
    20
    ·
    edit-2
    7 days ago

    Yet another AI enshittification on top of visual artists hard work.

    “NTC drastically reduces VRAM usage by emulating textures, allowing for either much lower VRAM consumption or significantly enhanced material appearance, depending on the game developer’s goals.”

    We are just emulating textures, trust me bro.

          • MolochHorridus@piefed.social
            link
            fedilink
            English
            arrow-up
            7
            ·
            edit-2
            7 days ago

            I guess you didn’t bother to read the article.

            “This process is so refined that the output can either provide a more realistic version on top of the base texture layer that the game uses or maintain the same texture appearance for significant VRAM savings.”

            What is a “more realistic version on top of the base texture layer” if not AI enshittification? Either it compresses the textures (and when has that ever maintained the original quality?), or it layers AI-generated “better” textures on top of the original ones. Shit vs shit.

            • tekato@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              ·
              6 days ago

              If you look at the NVIDIA presentation (linked in the article), they explain how they quantify realism at around minute 16. Also, audio and video compression doesn’t aim for original quality; it aims for lower quality that is visually or acoustically “the same” from the human perspective, not the computer’s. Just because it’s AI doesn’t mean it’s garbage.

              • MolochHorridus@piefed.social
                link
                fedilink
                English
                arrow-up
                2
                ·
                6 days ago

                So no original quality in any option, only worse or AI “enhanced” as I said: enshittified. They’re trying to push the “enhanced” shit by lowering the quality of the original vision.

                • tekato@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  6 days ago

                  The original quality will be there when you turn this feature off. It’s safe to assume this will just be part of a future DLSS version

  • Mark with a Z@suppo.fi
    link
    fedilink
    English
    arrow-up
    19
    ·
    7 days ago

    So, nvidia, a gpu maker, invents a technology that reduces memory needs in exchange for more gpu needs. Bonus points for AI.

    • fallaciousBasis@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      ·
      7 days ago

      Nvidia has always had strong real time hardware accelerated memory compression.

      Compute is basically a free lunch compared to memory bottlenecks. And individual textures will probably fit in low level caches, which allows the compute to flex.

        • fallaciousBasis@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          5 days ago

          Compute units are significant, but so are the caches and i/o.

          Real world datasets tend to have a lot of sparsity.

          One of the biggest problems is the page fault, which is basically when the app needs to go out to storage to find the data it needs to continue execution. This leaves the processor waiting, which isn’t free.

          Generally, I’d say they go hand in hand about 50/50 plus or minus 10%.

          One benchmark might fit in L1 cache and really stress the cores, but in most benchmarks you’re exercising all the levels of cache, RAM, I/O, etc., which is a hell of a lot of system compared to the little bit of processor in that chip. GPUs, again, are often just massive combinations of throughput and compute, so it can be hard to really separate the two.

          And legit. Most data is compressed these days on the Internet. So that’s again compute used to save data in flight. It’s a neverending tradeoff.
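
          That in-flight tradeoff is easy to sketch with Python’s stdlib `zlib` (general-purpose DEFLATE here as an illustration, not Nvidia’s texture codec):

```python
import zlib

# Spend CPU cycles on (de)compression to move fewer bytes over the wire.
# Repetitive data compresses well; the round trip is lossless.
payload = b"some repetitive texture-like data " * 1000

compressed = zlib.compress(payload, 6)   # compute spent here...
restored = zlib.decompress(compressed)   # ...and here, to save bandwidth

assert restored == payload               # lossless round trip
print(len(payload), "->", len(compressed), "bytes in flight")
```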

  • Seefra 1@lemmy.zip
    link
    fedilink
    English
    arrow-up
    12
    ·
    edit-2
    6 days ago

    Novidia will do anything but bump the VRAM on consumer models. CG artists be damned.

    • Rekall Incorporated@piefed.socialOPM
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 days ago

      This is more like MP3 or H.264: a compression algorithm that strives to be perceptually lossless at mid to high quality settings.

      For much of mainstream music, on the vast majority of speakers/headphones, it’s difficult to tell a 256 kbps MP3 from a FLAC.
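
      Rough back-of-envelope sizes, assuming a 4-minute track and a ~1000 kbps average FLAC bitrate (an assumption for illustration; FLAC bitrate varies a lot with the material):

```python
# Perceptual codecs trade exactness for size. Assumed numbers:
# a 4-minute track, 256 kbps MP3 vs ~1000 kbps FLAC average.
seconds = 4 * 60
mp3_kbps, flac_kbps = 256, 1000   # FLAC figure is a rough assumption

mp3_mb = mp3_kbps * seconds / 8 / 1000    # kbit -> MB
flac_mb = flac_kbps * seconds / 8 / 1000

print(f"MP3 ~{mp3_mb:.1f} MB vs FLAC ~{flac_mb:.1f} MB")
```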

      This is sort of similar. Just read up on it.

    • Captain_Stupid@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 days ago

      That is not how VRAM works. Also, the compression could probably be applied to the files on disk as well, so you’d use less storage and less VRAM.