• Avid Amoeba@lemmy.ca · 11 hours ago

    Been using Qwen 3.x for a while now as a local LLM with search capability. The 3.5 and 3.6 ones are great and run very fast.

    • sp3ctr4l@lemmy.dbzer0.com · edited 19 minutes ago

      I got Qwen 3.5 running on a Steam Deck.

      It ain’t exactly blazing fast, but it does actually work.

      (It's reasonably fast if you go down to the 2B-param model. I can get the 9B-param variant working too, though that makes Steam Decky very hot and bothered.)

      Yeah, you absolutely do not need Nvidia hardware to run an LLM, but in the English-speaking West we get blasted all the time with their propaganda suggesting otherwise.

      Because if you don’t need Nvidia, well, then, this whole AI bubble looks a lot more bubbly.