A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.


If he’s using like an IDE and not vibe coding then I don’t have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn’t even write this comment I just wrote without asking AI for assistance.
Yeah, that’s my biggest worry. I always have to hold colleagues to the basics of programming standards as soon as they start using AI for a task, since it is easier to generate a second implementation of something we already have in the codebase, rather than extending the existing implementation.
But that was pretty much always true. Still, we did not slap another implementation onto the side, because it’s horrible for maintenance: you then need to adjust two (or more) implementations every time requirements change.
And it’s horrible for debugging, because parts of the codebase will then behave subtly differently from other parts. It also makes usability worse, as users expect consistency.
And the worst part is that they don’t even have an answer to those concerns. They know it’s going to bite us in the ass in the near future. They’re on a sugar high, because adding features is quick, while looking away from the codebase getting incredibly fat just as quickly.
And when it comes to actually maintaining that generated code, they’ll be the hardest to motivate, because that isn’t as fun as just slapping a feature onto the side, nor do they feel responsible for the code, because they don’t really know how it works. Never mind that they’re also less sharp in general, because they’ve outsourced their thinking.
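A hypothetical sketch of the duplication problem described above (the function names and the two-decimal requirement are invented for illustration): two independent copies of the same rule drift apart as soon as a requirement changes in only one place, which is exactly the subtle inconsistency that makes debugging and maintenance painful.

```python
# Two independent implementations of the same formatting rule.
def format_price_checkout(amount_cents: int) -> str:
    # This copy was updated when the requirement changed
    # to always show two decimal places.
    return f"${amount_cents / 100:.2f}"

def format_price_invoice(amount_cents: int) -> str:
    # The second, bolted-on copy was never updated,
    # so it now behaves subtly differently.
    return f"${amount_cents / 100}"

# The maintainable alternative: one shared implementation,
# extended in place when requirements change.
def format_price(amount_cents: int) -> str:
    return f"${amount_cents / 100:.2f}"

print(format_price_checkout(1950))  # $19.50
print(format_price_invoice(1950))   # $19.5  <- inconsistent output
```

With a single shared `format_price`, the requirement change lands in one place and every caller stays consistent.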
Hell, most people turn off their brains when the word gets mentioned at all. There’s plenty of basic shit an AI can do just as well as a human. But people hear AI and instantly become the equivalent of a shit-eating insect.
As long as you’re educated and experienced enough to know the limitations of your tools and use them accurately and correctly, AI is literally a non-factor and about as likely to make an error as the dev themselves.
The problem with AI slop code comes from executives in high up positions forcing the use of it beyond the scope it can handle and in use cases it’s not fit for.
Lutris doesn’t have that problem.
So unless the guy suddenly goes full stupid and starts letting AI write everything, the quality is not going to change. If anything, it’s likely to improve as he offloads tedious small things to his more efficient tools.
The problem is I’ve seen people who supposedly have a brain start to use AI, and over time they become increasingly confident in the AI’s abilities. Then they stop bothering to review the code.
@gruk iz dis tru?