• X@piefed.world · 37 points · 2 days ago

      They could say “the connection is probably lost,” but it’s more fun to do naive time-averaging to give you hope that if you wait around for 1,163 hours, it will finally finish.
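
      A minimal sketch of the naive time-averaging being mocked here (all numbers invented for illustration):

      ```python
      # Naive ETA: remaining bytes divided by the average speed since the
      # transfer started. After a stall, that average decays slowly, so the
      # ETA balloons instead of admitting the connection is probably lost.
      def naive_eta(bytes_done: int, bytes_total: int, elapsed_s: float) -> float:
          avg_speed = bytes_done / elapsed_s  # bytes/sec over the whole transfer
          return (bytes_total - bytes_done) / avg_speed

      # 10 MB of a 1 GB file arrived in the first second, then the link died.
      # An hour later the "average" is ~2.8 kB/s and the ETA is ~99 hours.
      print(naive_eta(10_000_000, 1_000_000_000, 3600.0) / 3600)
      ```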

    • SorryQuick@lemmy.ca · 7 points · 2 days ago

      But really, it’s just how it will always be. How do you estimate transfer speed? Use the disk speed / bandwidth limit? Can’t do that, since it’s shared with other users/processes. So at the beginning there is literally zero info to go on. Some amount of per-file overhead also has to be accounted for, since copying one 100 GB file is not the same as copying millions of tiny files adding up to 100 GB.

      Then you start building an average from the transfer so far, using a weighted-average algorithm: recent speeds count for more, but not too much more (a rough sketch of this follows below). Just because you are ultra slow now doesn’t mean it will always be slow. Maybe your brother is downloading porn and will hog the bandwidth all day, or maybe he’ll be done in a few seconds.

      So to put it simply, predicting transfer time is pretty much the same as predicting the future.
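
      A rough sketch of that weighted average as an exponential moving average (ALPHA is an arbitrary tuning knob, not any real tool’s value):

      ```python
      # EMA-based ETA: recent speed samples dominate, but a single slow
      # tick can't wreck the estimate ("valued, but not too valued").
      ALPHA = 0.2  # higher = trust recent samples more (assumed value)

      class EtaEstimator:
          def __init__(self, total_bytes: int):
              self.total = total_bytes
              self.done = 0
              self.ema_speed = None  # bytes/sec; None = zero info at the start

          def update(self, bytes_this_tick: int, tick_seconds: float) -> float:
              """Feed one progress sample; return estimated seconds remaining."""
              self.done += bytes_this_tick
              sample = bytes_this_tick / tick_seconds
              if self.ema_speed is None:
                  self.ema_speed = sample  # first sample: nothing to average yet
              else:
                  self.ema_speed = ALPHA * sample + (1 - ALPHA) * self.ema_speed
              return (self.total - self.done) / max(self.ema_speed, 1e-9)

      est = EtaEstimator(total_bytes=10**9)
      print(est.update(5_000_000, 1.0))  # one tick at 5 MB/s -> ~199 s left
      ```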

      • tetris11@feddit.uk · 8 points · 1 day ago

        I like rsync’s progress: speed and files left

        I detest the needless line chart Windows 10 had

      • Eheran@lemmy.world · 2 points · 1 day ago

        Transfer speed on disks was and is almost exclusively a matter of file size, so it should be easy to estimate a much better time than the dumb “total bytes / current speed”, which constantly fluctuates since file sizes are not all identical.
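
        A toy illustration of that fluctuation (file sizes and speeds are invented): with mixed file sizes, “total bytes / current speed” swings by orders of magnitude even on an idle machine.

        ```python
        # Each tuple is (file size in bytes, speed in bytes/sec while copying it):
        # big files stream fast, tiny files crawl, so the naive ETA whipsaws.
        files = [(10 * 10**9, 180e6), (4096, 2e6), (2 * 10**9, 170e6), (4096, 2e6)]
        total = sum(size for size, _ in files)
        done = 0
        for size, speed in files:
            done += size
            eta = (total - done) / speed  # "total bytes / current speed"
            print(f"{done / total:6.1%} done, naive ETA {eta:12.1f} s")
        ```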

        • SorryQuick@lemmy.ca · 3 points · 5 hours ago

          That’s so wrong. It always fluctuates because the speed itself always fluctuates. It’s only “easy” when you know the speed won’t fluctuate, i.e. when nothing else is using the computer at the same time.

          • Eheran@lemmy.world · 1 point · 4 hours ago

            Since file size is not taken into account, it fluctuates wildly even if you don’t do anything other than transferring those files.

              • Eheran@lemmy.world · 1 point · 2 hours ago

                Since when is that so? And where? W11 essentially “pauses” when lots of small files follow bigger ones that move at >1000 MiB/s, since the small ones only reach perhaps 100 MiB/s.

                • SorryQuick@lemmy.ca · 1 point · 1 hour ago

                  Since forever. I can’t say for Windows, since I haven’t used it in forever, but almost all sensible algorithms take it into consideration. There are also many factors, such as which filesystem (ext4…) you use. You can’t account for them all. Usually you simply add a small “overhead” constant per file, so a pile of small files pays it many times while a big one pays it only once.
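
                  A sketch of that per-file overhead constant (both constants are invented, not taken from any real tool):

                  ```python
                  # ETA = raw byte time + a fixed cost per file: a big file pays
                  # the constant once, a million small files pay it a million times.
                  SPEED_BPS = 200e6   # assumed sustained throughput, bytes/sec
                  OVERHEAD_S = 0.005  # assumed per-file cost (open/close, metadata, seeks)

                  def eta_seconds(bytes_left: int, files_left: int) -> float:
                      return bytes_left / SPEED_BPS + files_left * OVERHEAD_S

                  print(eta_seconds(100 * 10**9, 1))          # one 100 GB file: ~500 s
                  print(eta_seconds(100 * 10**9, 1_000_000))  # a million small files: ~5500 s
                  ```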

        • Natanael@infosec.pub · 1 point · 4 hours ago

          On disc you have read/write misses and seeks, and due to constant RPM plus geometry, the read/write speed literally varies with the physical distance of the written data from the center of the platter (more bits per arcsecond at the outer edge)
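
          Back-of-the-envelope numbers for the geometry point (linear density and radii are rough guesses for a 3.5″ drive): at constant RPM, the track at a larger radius passes more bits under the head per revolution, so outer tracks read roughly twice as fast as inner ones.

          ```python
          import math

          RPM = 7200            # constant angular velocity
          BYTES_PER_MM = 7000   # assumed linear density along the track
          for radius_mm in (20, 30, 46):  # inner, middle, outer track radius
              bytes_per_rev = 2 * math.pi * radius_mm * BYTES_PER_MM
              mb_per_s = bytes_per_rev * (RPM / 60) / 1e6
              print(f"r = {radius_mm} mm -> ~{mb_per_s:.0f} MB/s sequential")
          ```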