I’ve had a good experience with smollm2:135m. The test case I used was determining why an HTTP request from one system was not received by another system. In total, there are 10 DB tables it must examine, not only for logging but also for configuration, to understand if/how the request should be processed or blocked. Some of those are mapping tables: table B must be used to join table A to table C, and table D must be used to join table C to table E, giving a path that traverses the complete configuration set (table A <-> table E).
I had to describe each field being pulled (~150 fields total), but it was able to determine the correct reason for the request failure.
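For context, the join path it had to reason about looks roughly like the sketch below. This is only an illustration: the table and column names (table_a through table_e, request_id, block_reason) and the driver choice are placeholders I’ve made up, not the real schema. It just shows how tables B and D act purely as mapping tables between A/C and C/E.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // arbitrary driver choice; any database/sql driver works
)

func main() {
	// Placeholder DSN -- real connection details depend on the environment.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/appdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Walk the configuration path A -> B -> C -> D -> E.
	// B and D contribute nothing but join keys.
	const q = `
SELECT a.request_id, e.block_reason
FROM   table_a a
JOIN   table_b b ON b.a_id = a.id   -- mapping table: A <-> C
JOIN   table_c c ON c.id   = b.c_id
JOIN   table_d d ON d.c_id = c.id   -- mapping table: C <-> E
JOIN   table_e e ON e.id   = d.e_id
WHERE  a.request_id = $1`

	var requestID, blockReason string
	if err := db.QueryRow(q, "req-123").Scan(&requestID, &blockReason); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("request %s blocked because: %s\n", requestID, blockReason)
}
```

The model worked from my field descriptions rather than from a query like this, so the sketch is only meant to show the shape of the traversal it had to reconstruct.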
The only issue I’ve had was a separate incident with a different LLM, when I tried to use AI to generate golang template code for a database library I wanted to use.
It ignored that library and recommended a different one instead.
When instructed that it must use this specific library, it refused (politely).
That caught me off guard. I shouldn’t have to create a scenario where the AI goes to jail if it fails to use something.
I should just have to provide the instruction and, if that instruction is reasonable, await output.