Does vibe coding risk destroying the Open Source ecosystem? According to a pre-print paper by a number of high-profile researchers, this might indeed be the case based on observed patterns and some…
Vibe coding is a black hole. I’ve had some colleagues try to pass stuff off.
What I’m learning is that the code itself is secondary to the understanding you develop by creating it. You don’t create the code? You don’t develop the understanding. Without the understanding, there is nothing.
Yes. And using the LLM to generate the code, then developing the requisite understanding and making it maintainable, is slower than just writing it yourself in the first place. And that effect compounds with repetition.
The Register had an article, a year or two ago, about using AI in the opposite way: instead of generating code, someone was using it to discover security problems in existing code. They said it was really useful for that, and most of the things it identified, including one codebase that was sending private information off to some internet server, really were problems.
I wonder if using LLMs as editors, instead of writers, would be a better use for these things?
They are pretty good at summarisation. If I want to catch up with a long review thread on a patch series I’ve just started looking at, I occasionally ask Gemini to outline the development so far and the remaining issues.
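FWIW, that kind of catch-up is easy to script. Here’s a minimal sketch, assuming the google-generativeai Python package with an API key in the environment; the file name thread.txt, the model choice, and the prompt wording are all illustrative, not a recommendation:

```python
# Sketch: ask Gemini to summarise an exported patch-review thread.
# Assumes `pip install google-generativeai` and GEMINI_API_KEY set in
# the environment; thread.txt is a hypothetical text export of the thread.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # any available model

with open("thread.txt", encoding="utf-8") as f:
    thread = f.read()

prompt = (
    "Below is a mailing-list review thread for a patch series. "
    "Outline the development so far and list the remaining open issues.\n\n"
    + thread
)

response = model.generate_content(prompt)
print(response.text)
```

The output is a starting point for reading the thread, not a substitute for it.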
A second pair of eyes has always been an acceptable way to use these, imo, but it shouldn’t be the primary or only one.