• neon_nova@lemmy.dbzer0.com · 2 days ago

      Thanks! I figured it’s low on RAM, but with the way things are going in the world, I’m thinking it’s better than nothing.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 2 days ago

        It’s entirely possible we’ll see fairly capable models that can run in 16 gigs of RAM in the near future. Qwen 3.5 came out in February, and you needed a server with hundreds of gigs of memory to run its 397-billion-parameter model. Fast forward to a couple of weeks ago: Qwen 3.6 comes out with a 27-billion-parameter version that beats the old 397B one in every way. Just stop and think about how phenomenal that is: https://qwen.ai/blog?id=qwen3.6-27b
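        To put rough numbers on why a 27B model could plausibly fit in 16 gigs, here’s some back-of-the-envelope math (not from the post, just napkin estimates with typical bytes-per-parameter figures for common quantization formats; real usage also depends on the runtime, context length, and KV cache):

        ```python
        # Rough weight-memory estimate for a 27B-parameter model.
        # Bytes-per-parameter values are typical for common quantization
        # schemes (e.g. llama.cpp-style 8-bit and 4-bit quants); exact
        # sizes vary a bit by format.
        PARAMS = 27e9

        precisions = {
            "fp16": 2.0,   # full half-precision weights
            "8-bit": 1.0,  # ~8-bit quantization
            "4-bit": 0.5,  # ~4-bit quantization
        }

        for name, bytes_per_param in precisions.items():
            gib = PARAMS * bytes_per_param / 2**30
            print(f"{name}: ~{gib:.1f} GiB of weights")
        ```

        At 4-bit that works out to roughly 12–13 GiB of weights, which is why a model that size is at least plausibly runnable on a 16 GB machine.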

        So it’s quite possible people will find ways to optimize this stuff even further this year or next, and we’ll get an even smaller model that’s more capable.