• 9point6@lemmy.world · 7 days ago (edited)

    You’ve gotta look at the magnitude of the difference here. To use a toy example:

    Let’s use a memory data rate of 1 GB/s for simple maths.

    8 GB of uncompressed textures would take about 8 seconds to move.

    The compressed version at <1 GB is going to be done in under 1 second.

    As long as the decompression process doesn’t take another 7 seconds to complete, it’s going to be more performant.
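    The arithmetic above can be sketched in a few lines. These are the toy numbers from the comment (1 GB/s, 8 GB vs <1 GB), not real hardware figures:

```python
# Toy back-of-envelope numbers from the comment above (hypothetical, not real hardware)
RATE_GB_S = 1.0          # assumed memory data rate: 1 GB/s
UNCOMPRESSED_GB = 8.0    # uncompressed texture data
COMPRESSED_GB = 1.0      # compressed texture data (the comment says <1 GB)

def transfer_time(size_gb: float, rate_gb_s: float = RATE_GB_S) -> float:
    """Seconds to move size_gb of data at rate_gb_s."""
    return size_gb / rate_gb_s

uncompressed_t = transfer_time(UNCOMPRESSED_GB)   # 8.0 s
compressed_t = transfer_time(COMPRESSED_GB)       # 1.0 s

# The compressed path wins as long as decompression finishes within the gap:
decompress_budget = uncompressed_t - compressed_t  # 7.0 s
print(f"Decompression budget: {decompress_budget:.1f} s")
```

    So at these made-up rates, decompression has a 7-second budget before the compressed path stops being a win.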

    Edit: typo

      • 9point6@lemmy.world · 6 days ago

        to use a toy example […] for simple maths

        I know, I just picked easy numbers for the sake of discussion. The actual data rate is not important to this particular discussion.

    • 3rdXthecharm@lemmy.ml · 7 days ago

      Thanks for the explanation!

      Would you happen to have information about any loss from compression, or is that kind of thing negligible given how much time there is for it to unpack?

      That would just be my only (uninformed) concern. I already fear we’re going too deep into an era of ‘fake’ things: fake frames, fake 4K, fake lighting, strobing to induce less blur on moving objects (that monitor test was sick, but I also fear eye exhaustion will become a thing). My sibling has a card capable of the new frame gen, and it doesn’t look as bad, but it’s still not visually equal to the same framerate rendered raw, in terms of clarity, for me.

      • 9point6@lemmy.world · 6 days ago

        No idea on the loss side of things tbh, though given it’s AI-based, I’m assuming it can’t be truly lossless.