• Avid Amoeba@lemmy.ca
    13 hours ago

    Take good care of your hw! It’s not like 2 years ago when you could buy stuff off the shelf for reasonable prices. :D

    • sp3ctr4l@lemmy.dbzer0.com
      13 hours ago

      My Steam Deck is my child.

      Maybe if I can get it to run a ‘good enough’ LLM, and also a robotics kinematics suite…

      I can just start building DOG, with a Steam Deck for a face, instead of a Combine scanner bot.

      • los0220@lemmy.world
        8 hours ago

        Gemma 4 seems nice for local usage, way faster than Qwen models.

        I was able to run 27B Gemma on my PC, where 14B Qwen was too slow due to CPU offload.

        • percent@infosec.pub
          7 hours ago

          +1, exactly the same experience. Except Gemma4:26B really sucks with OpenCode. Works great with Pi, though.