The Lutris maintainer has been using AI-generated code for some time now. He also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way, but it was not AI that bought up all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it was deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
He might have had a leg to stand on here if this was an AI that he had trained himself on ethically-sourced data, but personally I don’t want to be lectured by anyone about “our current capitalist culture” who is intentionally playing right into it by financially supporting the companies at the center of the AI bubble. The very corporations that are known to have scraped countless terabytes of unlicensed data for their own for-profit exploitation, by the way.
If you discard your self-proclaimed values the second that it becomes convenient or “valuable”, you never had any values to begin with.
Practice what you preach, or don’t preach at all.
But does the software still work?
Here’s the thing. The more you use AI to generate your code, the less likely you are to fully review all of it, understand what it’s doing and be able to fix it when bugs or exploits appear, or even know that they exist. So sure, it might work for now but what about in a couple of years of vibe coding it?
Then… we only use the versions that work. And someone can fork from there.
Would have been easier if the original dev(s) continued to work on it themselves, instead of sloppifying the code.
Technical debt is a very real thing that has been around for a long time and is well documented.
AI code is not old enough for the technical debt to have really hit hard yet.
It’s not a fallacy if the slope actually is slippery.
Except, of course, that this is not an imagined or even unlikely outcome; so, no, by definition your link does not apply.
Maybe read what you link???
welp, another project off my list.
It was handy as in it enabled me to not require opening EGS, but I haven’t been using EGS lately anyway.
It’s easier to just stop using it rather than have to write a Firejail profile for it.
Baseless whinge
It’s slop now despite using AI code for some time?
It has been slop for a while.
Just took a while to realise.
this is some real 2022 style complaint
most developers are using ai in 2026 in some way, it’s simply too good
“it’s simply too good”
Tell that to the code reviews I’ve been rejecting. Strong disagree. People are using it because they swallowed the snake oil; that doesn’t mean we can’t keep fighting against it.
No we’re not.
Fast =/= good
People are malding, but it’s the truth.
You are living under a rock if you think any major software now doesn’t have AI-written pieces to it in some manner.
It’s so common now that it’s a waste of time to label it; you should just assume AI was involved at this point.
Where I work, the company has a ChatGPT contract that’s used as a coding assistant tool in VS Code, and I imagine also by the admin/contract/legal people doing what they do. Every contracting company developer I’ve worked with has some enterprise ChatGPT/Claude/Gemini/etc. at their company. I’ve talked to software developers at large companies who raved about what they could do with enterprise Claude and enterprise Cisco AI coding tools.
Pretty much everyone I know at the minimum uses the Gemini Google search summary for coding questions/dockerfile/kubernetes/open shift/docker compose/helm/terraform/ansible/bash script/python script/snippets/…
It seems like the only people who actually derive value out of it are software developers or middle managers. Every other professional discipline has liability and a need to verify accuracy before actioning something. So beyond reading the AI generated summary on a search engine for non critical things it’s basically useless.
I have multiple years of experience maintaining and reviewing code for a medium-sized open source project, and in my experience we have not seen any meaningful increase in good contributions since the AI investment bubble kicked off a couple of years ago.
On the flip side, I know that dealing with a glut of low-quality AI-generated slop merge requests has been a real problem for other large open source projects. https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/
In my personal view, AI is really not suitable for actual programming, just typing. Programming requires thought and logic, something LLMs do not actually possess and are not capable of. Furthermore, without an authentic understanding of the code that is being generated, the human beings who are ultimately responsible for maintaining the code, fixing errors, and making improvements will only be hurting themselves in the long run when they can’t follow the “logic” of what was written. You’re just creating more problems for yourself in the future.
Personification of probability doesn’t do us any good, open source projects require thoughtful contributions from thinking entities.
To make matters worse, I think that AI is also not at all suitable for “open source” development, as it obfuscates authorship and completely obliterates the concept of FOSS licensing.
Were AI models trained on FOSS code including GPL-licensed code? Does this make the output of AI models GPL too, or are LLMs magical machines that can launder GPL code into something proprietary? How do you know that the code produced by your LLM is legally safe and not ripped verbatim from someone else’s scraped proprietary codebase? Finally, who is the author and copyright holder of AI generated code?
Ultimately, right now in 2026, we are seeing a lot of generative AI use being forced by the corporate world, but we are not seeing that result in any meaningful improvement to worker productivity or product quality. (Windows 11 has never been in worse shape than it is today, and I can only assume that is because it is being programmed with much less human intelligence behind it.)
A number of weeks ago, I noticed that an app I run through Lutris (for the various settings it needs) had stopped loading. So I did a lot of looking around to figure out what was going on; I didn’t know it was Lutris, so I searched for mentions of issues with the app, with Wine, etc.
Finally, I ran across a bug report in the Lutris GitHub that sounded like my problem. Part of the issue was how slowly some updates filter out, so I ended up uninstalling from the package manager and manually forcing an update. And all is good now.
I wonder if the bug was AI driven (don’t even recall what it was, it was a small update that broke things for some people).
Great to know I should probably expect more fires later. I probably need to see if I can make this app run on my own in Wine. A shame, since it’s working fine as is. But I need to be ready.