I didn’t ask for this and neither did you. I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off o…
Writing code with an LLM is often actually less productive than writing without.
Sure, for some small tasks it might poop out an answer real quick, and it may look like something that's good. But it only looks like it; checking whether it actually is good can be pretty hard. It is much harder to read and understand code than it is to write it. And in cases where a single character is the difference between having a security issue and not having one, those mistakes are very hard to spot. People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing — not an in-depth code review, which is hard and costs time.
Then there are all the cases where the LLM messes up and doesn't give a good answer, even after repeated back and forth. Once the thing is stuck on an incorrect solution, it's very hard to get it out of there, and once the context window runs out, it becomes a nightmare. It will say something like "Summarizing conversation", which means it deletes lines from the conversation that it deems superfluous, even if those are critical requirement descriptions.
There's also the issue that an LLM simply can't do a large, complex task. They've tried to fix this with agents and planning modes and such: breaking everything down into smaller and smaller parts so each can be handled. But nothing keeps an overview of the mismatched set of nonsense it produces — something a real coder is expected to handle just fine.
The models are also always trained a while ago, which can be really annoying when working with something like Angular. Angular gets frequent updates, and those usually bring breaking changes, updated best practices, and sometimes entire paradigm shifts. The AI simply doesn't know what to do with the new version, since it was trained before it existed, and it will spit out Stack Overflow answers from 2018 — especially the ones with comments saying to never, ever do that.
There's also so much more to being a good software developer than just writing the code, and the LLM can't do any of those other things; it can only write the code. By not writing the code ourselves, we lose an important part of the process. That's a muscle that needs flexing, or the skill rusts and goes away.
And now they've poisoned the well, flooding the internet with AI slop and destroying it in the process. Website traffic has gone up, but actual human visits have gone down. Good luck training new models on that garbage heap of data. That might be fine for now, but as new versions of things get released, the LLM will get more and more out of date.
It helps me code faster, but I really only outsource boilerplate to an LLM. I will say it also helps with learning the syntax of libraries I'm unfamiliar with, in that I don't have to go through several pages of documentation to get the answer I need in the moment. The speed-up is modest and nowhere near the claims of vibe coders.
Because this comes up so often, I have to ask, specifically what kind of boilerplate? Examples would be great.
I always wonder this as well… I will use tools to help me write some repetitive stuff periodically. Most often I'll use a regex replace, but occasionally I'll write a little perl or sed or awk. I suspect the boilerplate these people talk about is either this or setting up projects, for which I think there are also better tools.
My experience as well.
I've been writing Java lately (not my choice), which has boilerplate, but it's never been an issue for me because the Java IDEs all have tools (and have for a decade+) that eliminate it. Class generation, main, method stubs, default implementations, and interface stubs can all be generated easily in, for example, Eclipse.
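For concreteness, here's the kind of thing those generators emit — `Point` is a made-up example, not from the thread; only the two fields are typed by hand, and everything below them is what Eclipse's "Generate Getters" and "Generate hashCode() and equals()" actions would produce:

```java
// Hand-written part: the two fields and nothing else.
// The constructor, getters, equals, and hashCode below are the kind
// of code the IDE source-generation actions write for you.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof Point)) return false;
        Point other = (Point) obj;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}
```

And since Java 16, `public record Point(int x, int y) {}` removes even the generation step.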
Same for tooling around (de)serialization and class/struct definitions. I see that touted as a use case for LLMs, but like… tools have existed[1] for doing that since before LLMs, and they're deterministic and computationally free compared to neural nets.
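To make the determinism point concrete, here's a sketch of what such a converter emits for a small invented JSON document (the field names are made up, and real tool output may differ in naming and annotations):

```java
// Input (invented example):  {"name": "widget", "count": 3}
// A json-to-java style tool maps each JSON field mechanically onto
// a Java field with a getter and setter. No model, no sampling:
// the same input always yields the same class.
public class Root {
    private String name;
    private int count;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getCount() { return count; }
    public void setCount(int count) { this.count = count; }
}
```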
e.g. https://transform.tools/json-to-java ↩︎