People have trouble with the middle ground. AI is useful in coding. It's not a full replacement. That should be fine, except you've got the AI techbros and CEOs on one end thinking it will replace all labor, and you've got the backlash on the other end that wants to constantly talk about how useless it is.
the times i trust LLMs: when i'm using them to look up stuff i've already learned but can't remember and just need to refresh. there's no point memorizing shit i can look up and am not going to use regularly, and i'm the effective guardrail against the LLM being wrong when i use it that way.
the times i don’t trust the LLMs: all the other times. if i can’t effectively verify the information myself, why am i going to an unreliable source?
having to explain that nuance over and over, it's just shorter and easier to say the llm is an unreliable source. which it is. when i write the code myself and i'm not being lazy, my output doesn't need testing (it still gets at least 2 reviews, but the last time those reviews caught anything was years ago). the llm's output always needs testing.
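to make "always needs testing" concrete, here's a minimal sketch of what that guardrail looks like in practice. everything in it is hypothetical: `slugify` stands in for whatever function the llm handed back, and the asserts are the cheap checks i'd run before trusting it:

```python
import re

# hypothetical example: pretend an llm suggested this slugify() helper.
# it looks plausible, which is exactly why it still has to be tested.
def slugify(text: str) -> str:
    """lowercase, collapse non-alphanumeric runs to hyphens, trim edges."""
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# the guardrail: quick assertions covering the cases i actually care about.
# if any of these fail, the llm's answer was wrong, however confident it sounded.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaced  out  ") == "spaced-out"
assert slugify("already-a-slug") == "already-a-slug"
assert slugify("") == ""
print("all checks passed")
```

the point isn't the tests themselves, it's that i only wrote them because i could verify the behavior myself. that's the difference from asking the llm about something i can't check.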
I'd buy you a beer for that middle-ground summary. That is exactly SPOT ON.