• theunknownmuncher@lemmy.world · 12 hours ago (edited)

      You’ve proved my point that you don’t know what you’re talking about by blindly linking to the git repo. Couldn’t find any source that supports your claim? I wonder why.

      Sure, you can serve one request at a time to one patient user at a slow tokens-per-second rate, which makes running locally viable, but no RAM has the bandwidth to run this model at scale. Even flash would be incredibly slow on CPU with multiple requests. You’d need the high bandwidth of VRAM, and running across multiple GPUs in a scalable way requires extremely high-bandwidth interconnects between the GPUs.
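
      Some rough back-of-envelope math on why bandwidth is the bottleneck. To be clear, every figure below is an illustrative assumption, not a spec of this model: decode speed is roughly memory bandwidth divided by the bytes of weights read per token.

      ```python
      # Rough decode-speed ceiling: tokens/sec ≈ bandwidth / bytes read per token.
      # All numbers are illustrative assumptions, not specs of any real model.
      active_params = 32e9   # assumed active parameters per token (MoE)
      bytes_per_param = 1    # assumed 8-bit quantization
      bytes_per_token = active_params * bytes_per_param

      bandwidths_gb_s = {
          "dual-channel DDR5 (CPU)": 90,    # assumed
          "HBM3-class GPU":          3300,  # assumed
      }

      for name, bw in bandwidths_gb_s.items():
          print(f"{name}: ~{bw * 1e9 / bytes_per_token:.0f} tokens/sec ceiling")
      ```

      Batching concurrent requests amortizes those weight reads, but only if the whole model sits in fast memory, which is exactly the scaling problem.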

      • ag10n@lemmy.world · 11 hours ago

        Thank you for proving my point. It can be run on a CPU.

        “It’s slow, it’s inefficient”, but it still runs.

        It’s a foundation model, just like R1 was.
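
        Just to make “it runs” concrete, here’s a minimal CPU-only sketch using the llama-cpp-python bindings. The GGUF filename is a placeholder; you’d substitute an actual quantized checkpoint of the model:

        ```python
        # Minimal CPU-only inference sketch via llama-cpp-python.
        # The model path is a placeholder, not a real release artifact.
        from llama_cpp import Llama

        llm = Llama(
            model_path="model-Q4_K_M.gguf",  # hypothetical quantized file
            n_ctx=4096,                      # context window
            n_threads=16,                    # CPU threads to use
        )

        out = llm("Summarize mixture-of-experts in one sentence.", max_tokens=128)
        print(out["choices"][0]["text"])
        ```

        Slow and single-user, sure, but it runs.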

          • ag10n@lemmy.world · 11 hours ago

            Quote me in full.

            You can run it at scale, on Huawei. You can also run it on a CPU.

            • theunknownmuncher@lemmy.world · 11 hours ago (edited)

              Quote me in full.

              Okay!

              “You can run it at scale, on Huawei. You can also run it on a CPU.”

              Yeah, that is absolutely not what you argued.

              Anyway, you’ve conceded that I’m correct that you cannot run it at scale on a CPU, because running on a CPU is too slow and inefficient, and that they instead use accelerator hardware like Huawei’s Ascend chips to run the model at scale. That’s good enough for me!

              • Diurnambule@jlai.lu · 2 hours ago

                Okay, so you proceeded to just screenshot the part after the initial argument. Dude, put in more effort.

              • ag10n@lemmy.world · 10 hours ago

                Your interpretation of the English language has won you an argument! Huzzah.

                So good of you to concede it runs on a CPU.