The Lutris maintainer has been using AI-generated code for some time now. The maintainer also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
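For context: Claude Code marks its commits with a standard "Co-authored-by:" trailer in the commit message, which is what got stripped here. A minimal sketch (assuming that standard trailer format and a local clone of the repo; the function name is mine) of how one could have listed the AI-credited commits before the history was rewritten:

    # Sketch: list commits whose "Co-authored-by:" trailer mentions Claude.
    # Assumes the standard trailer format Claude Code appends; once the
    # trailers are stripped from history, this finds nothing.
    import subprocess

    def claude_coauthored_commits(repo_path="."):
        # %H = commit hash; %(trailers:...) pulls only Co-authored-by values,
        # comma-separated (%x2C) so each commit stays on a single line.
        log = subprocess.run(
            ["git", "-C", repo_path, "log",
             "--format=%H|%(trailers:key=Co-authored-by,valueonly,separator=%x2C)"],
            capture_output=True, text=True, check=True,
        ).stdout
        hits = []
        for line in log.splitlines():
            sha, _, coauthors = line.partition("|")
            if "claude" in coauthors.lower():
                hits.append(sha)
        return hits

    if __name__ == "__main__":
        shas = claude_coauthored_commits()
        print(f"{len(shas)} commits carry a Claude co-author trailer")

Of course, once the trailers are rewritten out of the history, nothing short of stylometry will tell you which code was generated.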
But does the software still work?
Here’s the thing. The more you use AI to generate your code, the less likely you are to fully review all of it, understand what it’s doing, and be able to fix bugs or exploits when they appear, or even know that they exist. So sure, it might work for now, but what about after a couple of years of vibe coding?
Then… we only use the versions that work. And someone can fork from there.
It would have been easier if the original dev(s) had continued to work on it themselves, instead of sloppifying the code.
It’s not a fallacy if the slope actually is slippery.
This is some real 2022-style complaint.
Most developers are using AI in some way in 2026; it’s simply too good.
Fast =/= good
People are malding, but it’s the truth.
You’re living under a rock if you think any major software today doesn’t have AI-written pieces in it in some manner.
It’s so common now that it’s a waste of time to label it; at this point you should just assume AI was involved.
Where I work, the company has a ChatGPT contract that’s used as a coding-assistant tool in VS Code, and I imagine also by the admin/contract/legal people doing what they do. Every contracting-company developer I’ve worked with has some enterprise ChatGPT/Claude/Gemini/etc. through their company. I’ve talked to software developers at large companies who raved about what they could do with enterprise Claude and enterprise Cisco AI coding tools.
Pretty much everyone I know uses, at a minimum, the Gemini Google search summary for coding questions/Dockerfile/Kubernetes/OpenShift/Docker Compose/Helm/Terraform/Ansible/bash script/Python script/snippets/…
I have multiple years of experience maintaining and reviewing code for a medium-sized open source project, and in my experience we have not seen any meaningful increase in good contributions since the AI investment bubble kicked off a couple of years ago.
On the flip side, I know that dealing with a glut of low-quality AI-generated slop merge requests has been a real problem for other large open source projects. https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/
In my personal view, AI is really not suitable for actual programming, just for typing. Programming requires thought and logic, something LLMs do not actually possess and are not capable of. Furthermore, without an authentic understanding of the code being generated, the human beings who are ultimately responsible for maintaining the code, fixing errors, and making improvements will only be hurting themselves in the long run when they can’t follow the “logic” of what was written. You’re just creating more problems for yourself in the future.
To make matters worse, I think that AI is also not at all suitable for “open source” development, as it obfuscates authorship and completely obliterates the concept of FOSS licensing.
Were AI models trained on FOSS code, including GPL-licensed code? Does that make the output of AI models GPL too, or are LLMs magical machines that can launder GPL code into something proprietary? Finally, who is the author and copyright holder of AI-generated code?
Ultimately, right now in 2026, we are seeing a lot of generative-AI use being forced by the corporate world, but we are not seeing that result in any meaningful improvement in worker productivity or product quality. (Windows 11 has never been in worse shape than it is today, and I can only assume that is because it is being programmed with much less human intelligence behind it.)
Baseless whinge
So it’s slop now, even though it’s been built with AI code for some time?