No point getting upset about this, it’s inevitable. So many FOSS programmers work thanklessly for hours and now there’s some tool to take loads of that work away, of course they’re going to use it. I know loads of people complain about it but used responsibly it can take care of so much of the mundane work. I used to spend 10% of my time writing code then 90% debugging it. If I do that 10% then give it to Claude to go over I find it just works.
That’s like the most incredibly hard part of all of this. Everything is aligned so that you don’t use it responsibly. And it’s really hard to guard against this.
Just a few days ago, I was pairing with a coworker and he was using Claude to do a bunch of stuff. He didn’t check any of it. I thought he was gonna check stuff before pushing stuff… And nope! I said, “Wait, shouldn’t we review the changes to make sure they’re correct?” And he said, “Nah, it’s probably fine. I trust it. Plus, even if it’s wrong, we’ll just blame the AI and we can just fix it later.”
…
Yes, checking the work would have negated all of the “time saved” and he was being a lazy fuck.
People who don’t like coding or engineering use this, and they are not interested in using it responsibly.
That’s valid for workers in a capitalist system or for capitalists trying to scam people. But why would someone sign their real name to unchecked AI slop for an open source project? It would risk ruining their reputation for little personal gain.
Skill issue
(Edited to add context)
Time issue
Whatever it is, it doesn’t mean LLMs are a sane or “inevitable” answer.
How is it a time issue if you have percentages?
I mean, OSS projects aren’t getting the support they need and still have to keep up with security, bugs, and features, so using LLMs to speed up development will help.
This is a bad take, which dismisses the amount of labor involved in maintaining widely used software projects.
I was referring (mostly jokingly) to his spending 90% of his time debugging. But you do you.