China's domestically developed open-source large language models have recorded more than 10 billion cumulative downloads worldwide, and the country now holds …
16GB is a bit low, unfortunately. You could run a 2-bit quant of the latest Qwen, but performance will be severely degraded. https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF
Might be worth trying though to see if it does what you need.
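A minimal sketch of trying it with llama-cpp-python, assuming the repo ships a Q2_K quant (the filename pattern here is a guess, so check the repo's actual file list):

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download and load a 2-bit GGUF quant from the repo linked above.
# The filename glob is an assumption; match it to the repo's file list.
llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen3.6-35B-A3B-GGUF",
    filename="*Q2_K.gguf",
    n_ctx=8192,       # keep the context modest to stay inside 16GB
    n_gpu_layers=0,   # CPU-only; raise this if you have spare VRAM
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python fizzbuzz."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```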
Thanks! I figured it’s low on RAM, but with the way things are going in the world, my thinking is that maybe it’s better than nothing.
It’s entirely possible we might see fairly capable models that can be run with 16 gigs of RAM in the near future. Qwen 3.5 came out in February, and you needed a server with hundreds of gigs of memory to run its 397B-param model. Fast forward to a couple of weeks ago: 3.6 comes out with a 27B-param version that beats the old 397B-param one in every way. Just stop and think about how phenomenal that is: https://qwen.ai/blog?id=qwen3.6-27b
So, it’s entirely possible people will find ways to optimize this stuff even further this year or next, and we’ll get an even smaller model that’s more capable. The rough memory math below shows why the 27B version matters so much for consumer hardware.
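Back-of-the-envelope: weight memory is roughly params times bits per weight divided by 8, plus some runtime overhead. The 1.2 overhead factor for KV cache and buffers is a ballpark assumption, not a measured figure:

```python
# Rough weight-memory estimate for a model at a given quantization.
def weight_gb(params_billion: float, bits: float, overhead: float = 1.2) -> float:
    # params * bits/8 gives raw weight bytes; overhead is a rough
    # allowance for KV cache and runtime buffers (assumed, not measured).
    return params_billion * bits / 8 * overhead

for label, params, bits in [
    ("397B @ 8-bit", 397, 8),
    ("27B  @ 8-bit", 27, 8),
    ("27B  @ 4-bit", 27, 4),
]:
    print(f"{label}: ~{weight_gb(params, bits):.0f} GB")

# 397B @ 8-bit: ~476 GB  (server territory)
# 27B  @ 8-bit: ~32 GB
# 27B  @ 4-bit: ~16 GB   (borderline for a 16GB machine)
```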
Thanks! That’s really amazing to hear. I guess I’ll wait a bit and see what happens.
Still worth using Qwen3-Coder-Next 80B? It runs slightly faster than 3.6 27B on my hardware.
I haven’t tried comparing them myself; I guess you just have to gauge whether it works well enough for you. :)
What software are you using with the models for code? OpenCode, Nanocoder, etc.?
I ended up settling on opencode, but I find all of them work more or less the same nowadays. Pi is an interesting one that’s very minimalist.
Integration with an editor?
I’ve stopped bothering with an editor for LLM work. I just have the model make a phased plan, write using TDD, and tell it to do staged commits for each feature. Then I review the diffs after.
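For the review step, a minimal sketch of walking the model’s commits one diff at a time; the branch names are hypothetical placeholders:

```python
import subprocess

BASE, BRANCH = "main", "feature/llm-work"  # placeholder branch names

# Commits the model created on the branch, oldest first.
revs = subprocess.run(
    ["git", "rev-list", "--reverse", f"{BASE}..{BRANCH}"],
    capture_output=True, text=True, check=True,
).stdout.split()

for rev in revs:
    # Show the commit message, diffstat, and full patch for one feature.
    subprocess.run(["git", "show", "--stat", "-p", rev], check=True)
    if input(f"continue past {rev[:8]}? [Y/n] ").strip().lower() == "n":
        break
```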