A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing number of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. It was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.


Yeah, but the problem is: is it? They absolutely insist that we use AI at work, which is an insane concept in and of itself, but if I have to nanny it to make sure it doesn’t make a mistake, then how is it a useful product?
He says it helps him get work done he wouldn’t otherwise do, but how’s that possible? How is it possible that he’s giving every line of code the same scrutiny he would if he wrote it himself, when he himself admits he would never have gotten around to writing that code had the AI not done it? The math ain’t matching on this one.
When was the last time you coded something perfectly? “If I have to nanny you to make sure you don’t make a mistake, then how are you a useful employee?” See how that doesn’t make sense? There’s a reason why good development shops live and die by their code review practices.
The math is just fine. Code reviews, even audit-level thorough ones, take far less time than writing the code in the first place.
There’s also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there’s code there. I still have to review it, point out some mistakes, and then go back and refill my drink.
And there’s so much you can customize with personal rules. Don’t like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
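To make that concrete, here’s a minimal sketch of what such a rules file might look like, loosely in the style of a Claude Code CLAUDE.md or Cursor rules file. The section names and example rules are purely illustrative, not any official schema:

```markdown
# Project rules (illustrative sketch)

## Coding style
- Use 4-space indentation and snake_case for Python functions.
- Prefer early returns over deeply nested conditionals.

## Known pitfalls
- Installer scripts are YAML, not JSON; don't assume JSON when parsing them.
- A Wine prefix may not exist yet; check the path before using it.

## Workflow
- Run the test suite before declaring a task done.
- Keep commits small; explain the "why" in commit messages, not just the "what".
```

The point isn’t the specific rules, it’s that every correction you’d otherwise repeat in chat gets written down once and applied on every future run.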
All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains on how to code for that project, and the more experience you gain in communicating with an entity that can understand your ideas. You wouldn’t believe how many people can’t rubberduck and explain concepts properly to other people, much less to LLMs.
LLMs are patient. They don’t give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.
Well, I’m not a code monkey, between dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don’t really have to pore over every single line. The only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different from reviewing a human’s code that you didn’t write.
Look at it like this: we automated the assembly of machines a long time ago. It had flaws early on that required intense supervision. The only practical difference here is how the damn things learned in the first place. Automating code generation is far more similar to that than to LLMs generating text or images, which aren’t logical by nature.
If the code used to train the models was good, what it outputs will be, on the whole, no worse than what some high school kid in an AP class produces when stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.
And let’s be real, Lutris has been very smooth across the board with the generated code so far. If he gets lazy, it could go downhill; but that could happen just as easily if he got lazy with his own code.
Another concept I’m more familiar with does relate here. Writing fiction can take months. Editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day, because of the aforementioned dyslexia and my not having a copy editor).
My first project, back in the eighties, was in BASIC and took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.
Maybe I’m too far behind on the various languages, but I really can’t see it being a massively harder proposition to scan and edit the output of an LLM.