Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth

  • wonderingwanderer@sopuli.xyz · 2 hours ago

    If Microsoft cared about privacy, they wouldn’t have made Windows practically spyware. Even if they install the AI locally in the OS, it’s still proprietary software that constantly sends data back to the mothership, consuming your electricity and RAM to do so. Linux has so many options that there’s really no reason not to switch.

    Small LLMs already exist for local self-hosting, and there are open-source options that won’t steal your data and turn you into a product:

    https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/
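    A minimal sketch of what local self-hosting looks like in practice, using llama-cpp-python (one open-source runtime among several; the model filename is a placeholder for whatever GGUF quantization you download):

    ```python
    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Placeholder path: point this at any GGUF file you've downloaded.
    llm = Llama(
        model_path="./models/some-small-model.Q4_K_M.gguf",
        n_ctx=4096,       # context window; larger costs more memory
        n_gpu_layers=-1,  # offload everything to the GPU if present; 0 = CPU only
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Why do local LLMs help privacy?"}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
    ```

    Everything stays on your machine; no tokens leave the box.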

    Bear in mind that the number of parameters your system can handle is limited by how much memory is available; a quantized version stores each weight in fewer bits, so the same amount of memory fits more parameters.
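    As a rough rule of thumb (the bits-per-weight figures here are approximate, and you need extra headroom for the KV cache and runtime overhead):

    ```python
    # Weights memory ≈ parameter count × bits per weight / 8.
    # Approximate averages; actual GGUF file sizes vary by quantization scheme.
    BITS_PER_WEIGHT = {"FP16": 16, "Q8_0": 8.5, "Q4_K_M": 4.85}

    def weight_gb(params_billion, quant):
        return params_billion * BITS_PER_WEIGHT[quant] / 8

    for quant in BITS_PER_WEIGHT:
        print(f"24B at {quant}: ~{weight_gb(24, quant):.0f} GB")
    # FP16 ~48 GB, Q8_0 ~26 GB, Q4_K_M ~15 GB: quantization is the difference
    # between "needs a workstation" and "fits on a 16-24 GB GPU".
    ```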

    Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up. But I’m no expert, so do some research and calculate for yourself what your system can handle.

    • tal@lemmy.today · edited · 1 hour ago

      > Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.

      Eh… I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX was north of $1k, an NVIDIA RTX 4090 was about $2k, and the NVIDIA RTX 5090 is presently going for over $3k (and rising) on eBay, well over MSRP. None of those GPUs is dedicated AI-compute hardware, either; they’re high-end gaming cards that people have repurposed for AI work.

      I think the largest LLM I’ve run on the Framework Desktop was a 106-billion-parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible onto the thing. I’m sure one could run substantially larger models.
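      The rough numbers check out, too, using the same bits-per-weight approximation as upthread (ignores KV cache and runtime overhead):

      ```python
      # 106B parameters at Q4_K_M, ~4.85 bits per weight
      print(106e9 * 4.85 / 8 / 1e9)  # ~64 GB of weights; comfortable in 128 GB
      ```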

      EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
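      A hedged sketch of that split with llama-cpp-python (the model path is a placeholder and the layer count is just illustrative; newer llama.cpp builds also support finer-grained tensor overrides for pinning expert weights to CPU, so check your build’s docs):

      ```python
      from llama_cpp import Llama

      # Keep a fixed number of layers on the GPU and leave the rest in main
      # memory; llama.cpp runs the CPU-resident layers from RAM. This is the
      # coarse version of "experts live in RAM, hot layers live in VRAM".
      llm = Llama(
          model_path="./models/some-moe-model.Q4_K_M.gguf",  # placeholder
          n_gpu_layers=20,  # illustrative; tune to your VRAM
          n_ctx=8192,
      )
      ```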