I’m kind of torn on this, because on the one hand I can see the developer’s troubles. If they have 30 years of experience and they considered the impact of using it, they most likely know how to use it properly and ethically. Indeed, many of the issues people have with AI are a kind of redirected anger, when really they are issues with capitalism, incompetence, or digital illiteracy. And the person posting the issue seems to be there purely to fan that flame rather than actually contribute. Something maintainers need just as little as slop-authored PRs.
But on the other hand, being open about the usage is a must. It’s the price to pay for going against the grain. If your ideals and means are pure, they should be defensible and able to withstand scrutiny from reasonable people, and there should be no issue with that in the long term. Hiding the usage will create doubt about authorship and make defenses harder to point at, while it won’t stop the horde.
Yeah, what rubs me the wrong way is that they went out of their way to hide it and are proud of it.
“This works perfectly, which is why I’m removing all ways to audit what it has contributed.”
“because that’s the only way to use it without being harassed online”
I disagree with his reasons for removing it, but they are pretty clear.
The downvotes are only making your argument for you, lol
Downvotes are pretty much death threats, amirite?
Lovely.
I haven’t been able to get The Elder Scrolls Online (ESO) to run under Steam lately. I was able to get it running under Lutris, and it was fine until the 5.20 update. Haven’t been able to play at all. It was good while it lasted, I guess. Time to look for a new solution. If anybody has any recs, I’d love to hear them. I’m running Linux Mint 22.3.

You should be able to just move the prefix into the folder managed by Bottles or Heroic, or add the existing folder location to either of those. All of them just present visuals on top of Wine/Proton. You might also have to manually download and set the Wine/Proton version from whichever app’s built-in management.
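A Wine prefix is just a directory, which is why different frontends can share it. A minimal sketch of the move (all paths here are hypothetical stand-ins; the real Lutris and Bottles/Heroic locations vary per install, so check each app’s settings):

```shell
set -eu
# Simulated layout for demonstration. In practice SRC would be your real
# Lutris-managed prefix (e.g. ~/Games/elder-scrolls-online, hypothetical),
# and DEST wherever Bottles or Heroic keeps or expects its prefixes.
tmp=$(mktemp -d)
SRC="$tmp/lutris-prefixes/eso"
DEST="$tmp/bottles/eso"

mkdir -p "$SRC/drive_c"         # a Wine prefix is a plain folder containing drive_c etc.
mkdir -p "$(dirname "$DEST")"
mv "$SRC" "$DEST"               # moving it keeps installed games and the registry intact
test -d "$DEST/drive_c" && echo "prefix moved"
```

After the move, point the new frontend at that folder and pick a matching Wine/Proton runner in its settings.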
Bottles is also run by an absolutely shitty person who’s slowly going insane.
Frankly I would avoid Bottles even more than Lutris, and if you’re going to avoid both, just use Heroic.
Bottles works much in the same way, and I always preferred it to Lutris. It’s also pretty easy to use plain old Wine if you’re comfy at all in the terminal. Pair it with winetricks and you can run most games with little hassle.
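The plain-Wine route the comment describes looks roughly like this, sketched as a dry run (the `run` wrapper only echoes, so it reads safely even without Wine installed; the prefix path and winetricks verbs are illustrative, not prescriptive):

```shell
# Dry-run sketch of a per-game plain Wine + winetricks workflow.
# `run` only echoes each command; drop the wrapper to do it for real.
run() { echo "+ $*"; }

export WINEPREFIX="$HOME/prefixes/mygame"   # one prefix per game keeps installs isolated

run wineboot --init              # create/initialize the prefix
run winetricks dxvk vcrun2022    # install common runtime pieces (verbs vary by game)
run wine ./setup.exe             # run the game's installer inside the prefix
```

Once installed, launching is just `wine` with the same `WINEPREFIX` exported.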
Is there a website like the one Lutris has, with all the install scripts others have made?
You run the game’s actual installer exe file in the bottle to install the game. No installation script needed.
Why can’t you revert to a previous version?
The system package for Lutris is version 5.14… Hmmm…
Fair response to someone opening an issue just to broadly complain about AI imo
Whether or not I use Claude is not going to change society
This gives me shopping cart theory vibes. I don’t usually base my moral compass on whether my action will have some kind of measurable impact, but on whether I believe it’s the right thing to do. After the intense doubling down in that discussion thread I’m definitely steering clear of Lutris. It costs me very little effort to avoid projects that do icky things I don’t want to encourage (even though it may not have a measurable impact~)
I can’t fix the problem, therefore I’ll be part of the problem.
Also, it is one thing to decide that something is not an ethical issue of concern, it is another thing to act with disrespect to everyone with a different opinion.
it is another thing to act with disrespect to everyone with a different opinion.
Unless that opinion is ‘I like using AI’, then they deserved the disrespect.
Lutris has always been a bit hit-or-miss for me; I avoided it unless it was the only option, as it only worked half the time. I don’t want it to come off like it shouldn’t exist, as stuff making Linux easier to use is great, but I don’t use it at all in my current workflows.
I guess I’ve just been behind the times, but I’ve never had an incentive to switch. I just installed Faugus and transferred everything over, and it seems very slick. It seems to be missing one or two things, like per-game environment variables, but all the other important stuff seems to be there. I know what I’m doing with prefixes, so having all the knobs to turn is great, but honestly Linux gaming does not need most of those knobs nowadays.
They are free to do what they want to on their repo.
We are free to fork if need arises.
Personally I don’t like projects not showing what AI has made. And most of Claude was trained on stolen code. It’s against the open source license they themselves use: https://github.com/lutris/lutris/blob/master/LICENSE
But almost no one actually enforces the license until the big companies show up. I hope they change their minds, but until then, I’m going to stop using/contributing for a while.
Does anyone know which was the last version before the dev started shoveling slop into the repo? The utter dipshit invalidated even the ability to license anything after that point; those releases are wholly worthless.
In 5 years from now there are going to be totally coevolved but unique seed-lines for software: the ones with AI, and the ones without. How can you distinguish them? Did the human who said they wrote them really write them? These problems aside, I suspect it will be forced to happen just from a security viewpoint: big companies won’t be able to get any kind of insurance anymore running AI-infested code.
That last bit needs to hit sooner.
It’s more nuanced than that. Claude is made from stolen code, but it generally isn’t going to copy its training data verbatim (unless specifically told to), so copyright-wise it’s more grey than strictly wrong. And though Claude is made from stolen code, the Lutris developers are writing something they give away freely to the world; they are not profiting from the stolen code.
Does this make it OK? I don’t know. What if they used an open-weights model rather than a closed one? Would that be more acceptable?
I’m now assuming it all is and deleting Lutris.
What a moron.
Oh yeah. Here’s another nugget:
Sometimes, I generate some code with Claude and commit by hand
Sometimes, I write code manually and ask Claude to commit
Sometimes, I ask OpenClaw to generate some code, which doesn’t put the Co-Authorship
Sometimes, the whole thing is AI generated from end to end
This is also a somewhat recent addition to Claude Code. I was kinda surprised when I first noticed it but didn’t think much of it, I was like “meh, I guess we’re doing that now, whatever, some people might take issue with it, whatever”. Also, do keep in mind that I love trolling people coming in my projects to complain about my methods.
For those who are anti-AI, it’s a safe assumption that any addition to the project has had some kind of AI interaction during the development process.
https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355
Now I’m really worried this software can wipe out my home directory
Sometimes, I ask OpenClaw to…
This person should not be trusted with anything.
That is the real shame in all this. I’m certainly not updating lutris any more, because there is no way of knowing what you will install on your system.
You can trust humans (as in “trusting is an option”). You can never trust an LLM. And admitting that there might be unsupervised commits, being installed on possibly thousands of PCs is terrifying.
Glad I use Heroic instead. Time to check what their AI policy is.
Based on some PRs, they’re using github copilot to help with reviews but are generally against vibe coding
Is this the same Lutris maintainer who took it out of the mint repos because he didn’t like some minor thing they did?
I wouldn’t be shocked if that kinda thing happened lol (or if Mathieu at least tried), but why would Lutris be in the Mint repos anyway?
They’re pretty small afaik, with the Ubuntu and Debian repos being used for non-Mint specific things
All I could find was that Lutris “dropped support” for Mint a number of years ago, whatever that means, but Mint is now displayed alongside Ubuntu & ElementaryOS on the downloads page anyways
Here’s my issue with this specifically. It makes Lutris very vulnerable to being considered entirely public domain:
There is no settled legal status on the output of AI systems, and it’s certainly something that does need clarification going forward. The law may treat asking an LLM to regurgitate its training data vs. following instructions in a local context differently. Human engineers are allowed to use “retained knowledge” from their experiences even if they can’t bring their notebooks from previous careers. LLMs are just better at it.
As of March 2, it has been settled. AI generated works must have substantial human creative input in order to be copyrightable. Prompting the AI does not meet that requirement.
In other words, if the AI wrote the code, and you didn’t change it since then, it’s not yours at all. It’s public domain, no question.
Prompting the AI alone does not meet that requirement. IE you can’t say “draw me a picture of a cat” and then copyright the picture of the cat claiming you made it.
You can say “help me draw this left ear over here, now make the right ear up here, a little taller, darken the edges a bit”, all with prompts, but with your sufficient creative input.
That’s not how the dev said he’s generating code. He said sometimes he does it without any intervention at all.
Also, that’s potentially copyrightable. That hasn’t been settled.
deleted by creator
Your link doesn’t support what you’re saying in the slightest. Have whatever opinion you want, but don’t shovel up transparent bullshit to push your narrative.
TFA is about a copyright on a work made by a purely autonomous device, and SCOTUS declining to hear a case doesn’t “settle” jack-shit.
Quoting further:
Thaler submitted an application to the US Copyright Office to register copyright in “A Recent Entrance to Paradise,” explicitly identifying the AI system as the author and stating the work was created without human intervention.
For now, businesses and creators using AI should continue to rely on the longstanding human authorship requirement. Under current law, works made solely by autonomous AI are not eligible for copyright protection in the United States. Ongoing cases also consider the amount of human input, including prompting or post-generation editing, required to register copyright in an AI-generated work.[12]
Companies should ensure a human contributes creatively and is named as the author in any copyright applications for AI-assisted works. To maximize protection, organizations should review their creative workflows and document human involvement in AI-assisted projects, particularly for commercial content. Organizations should continue to document the timing and scope of the use of AI in copyrightable works, for example by retaining prompts provided by the author. Internal policies should clarify attribution, ownership, the nature of creative input, and documentation requirements to avoid denied copyright applications.
Iteratively working on a codebase by guiding an LLM’s design choices and feeding it bug reports is fundamentally different from this case you’re citing.
If all you do is prompt the AI, “hey, fix bugs in this repo,” then you had no creative input into what it produces. So that kind of code would not be copyrightable, 100%. You can fight it in court, but the Supreme Court refusing to hear it means the lower court’s decision is settled law, and your chances of winning are essentially zero.
Whether code where you hold its hand and basically pair program with it is copyrightable hasn’t been settled. Considering the dev said he does it both ways, the point is rather moot, since for sure, he doesn’t own the copyright to at least some of that AI generated code.
OpenClaw is an autonomous system just like the one in that article, and the dev said that’s what he’s using at least some of the time. It generates and commits code without human intervention.
Glad it applies worldwide /s
Slop can’t be copyrighted, great. We don’t want slop.
“AI” has been known to present code from other projects and hence other licenses. It can’t become public domain unless all of that code was also public domain.
I’d imagine there have been more nonsensical (than AI = public domain) legal decisions that have had the full force of law for decades.
I recently dug around for a while, and if the copyright of works in the training data affects the copyright of outputs, no popular model can output anything that would even be close to acceptable for a contribution to an open-source project. Maybe if you trained a model exclusively on “The Stack” (NOT “The Pile”) and then included all the required attributions – but no ready-made model does that. All of the “open source” model frameworks that I could find included some amount of proprietary “pre-training” data that would also be an issue.
If AI output is NOT affected by the copyright of training data… there might not BE a (legal) person that can hold any copyrights over it, which is pretty close to public domain.
Lutris has been shit for months now - I guess I just figured out why.
Just assume everything is AI generated and feel free to ignore the GPLv3 because generated code doesn’t have any copyright. See how he reacts.
That’s not how this works.
The legal effect of AI-generated code on software licenses is untested in court and AFAIK isn’t addressed by any explicit statute. So really, no one knows how it will work yet.
The US Copyright Office has updated its guidelines:
If AI content is present, the Office will only register the work if the human contributions are sufficiently creative and if the AI-generated portions are supplementary or used as a tool under human direction. Essentially, they ask: “Is the work basically one of human authorship, with the computer merely assisting?” If yes, it can be protected (with a disclaimer that some content isn’t human-made). If no, if the AI’s role overshadows the human’s, then the work, or at least the AI-created portion, is not eligible for copyright.
In Canada, where I live:
So, can you claim copyright in an AI-generated work in Canada? As of 2025, the safest answer is: only if a human author contributed substantial creative effort to the final work. There needs to be some human “skill and judgment” or creative spark for a work to be protected.
If the AI was just a tool in your hands, for instance, you used AI to enhance or assemble content that you guided then your contributions are protected and you are the author of the overall work. But if an AI truly created the material with you providing little more than a prompt or idea, the law may treat that output as having no human author, and thus no copyright.
For now, anyone using AI in creative projects should keep documentation of their own input and creative choices. Emphasize the parts of the work where you exercised judgment or selected elements because those are likely what copyright will cover. And remember that copyright in AI-generated content is a fast-moving area.
https://www.foundationsoflaw.com/post/can-you-claim-copyright-in-ai-generated-works-in-canada
Makes sense to me.
The thing is, many of these guidelines relate to finalized products fully created by AI. As in, the AI produced a written or drawn work that on its own is the product (e.g. an article or an image). This will probably apply to code in some reasonable way, but at the end of the day there are only so many ways to write code, since it’s syntax and not as flexible as natural language. It actually has to produce something that works, so there are far fewer valid arrangements.
If you were to compare code written by two people at two companies doing a very similar project, you wouldn’t be surprised to find two pieces of code doing almost the same thing in the same syntax, barring cosmetic differences like naming and coding conventions. Neither will likely have violated the other’s copyright, since simultaneous invention is a thing. And if they happened to have similar prior experience, it’s even more likely.
Likewise, the way the code was incorporated into a project as a whole might sufficiently constitute a human contribution, and perhaps even the more important contribution. You likely wouldn’t retain the copyright on the specific snippet, but rarely are small code snippets enough on their own to claim copyright over to begin with. It’s the program or library or system as a whole that’s the finished product.
Just assume everything is AI generated
This is the part that will definitely not work.
It is now, strycore made it happen.
Holy purity test, I think people in this thread are slightly overreacting.
Sometimes, I ask OpenClaw to generate some code
https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355
OpenClaw is extremely vulnerable to prompt injection. If the maintainer is using it to author code, you absolutely cannot trust that the code is safe from exploits obfuscated as unintentional logic errors or bugs.
There’s purity testing, and then there’s being cautious about running code made by someone who is doing something incredibly stupid and unsafe. This is the latter.
You are assuming the author is being unsafe & not auditing code for very basic security issues.
Let me present this angle: small teams of volunteer open source developers finally have a way to ease the workload of the code they produce, but you want them to keep doing all the work manually because AI hurts your feefees.
Further, you are openly declaring you don’t trust the devs to audit their own code.
If you can find a security vulnerability in the code (it is open source after all) I’ll cede, but otherwise, I think it is a good thing responsible AI use can help shoulder the work these folks do for our benefit.
It looks like the issue submitter is trolling a number of projects on their personal anti-AI crusade. I would take it more seriously if they had reviewed any of the PRs and identified issues with them.
Yes AI slop is an issue (especially for maintainers) but it can still be a useful tool. If the maintainers want to use AI on their own code it should be their choice. Most forks fail because the righteous feeling of finally getting your own way on a repo you control usually falls away as you realise the people actually doing the work didn’t follow you.
Honestly the need for Lutris has gone way, way down in the last couple of years. I don’t know about forking it, but I think it’d be pretty easy to just avoid it. Less because there are any concrete issues that I could point out, and more as a political statement and loss of confidence.
This explains why it would break constantly… But that’s also why people moved to other solutions.
Meh, I don’t really care. It’s a free product and it does what I need it to. Just open an issue if there’s actually something wrong with the code itself, or pick a different piece of software if you disagree with the maintainer. There’s really no need for drama here.
It’s more of a political stance.
For a good example check out Asahi Linux: https://asahilinux.org/docs/project/policies/slop/
It is the opinion of the Board that Large Language Models (LLMs), herein referred to as Slop Generators, are unsuitable for use as software engineering tools, particularly in the Free and Open Source Software movement.
The use of Slop Generators in any contribution to the Asahi Linux project is expressly forbidden. Their use in any material capacity where code, documentation, engineering decisions, etc. are largely created with the “help” of a Slop Generators will be met with a single warning. Subsequent disregard for this policy will be met with an immediate and permanent ban from the Asahi Linux project and all associated spaces.
Common Asahi Linux W
That’s why we cannot have nice things.
People on the internet going nuclear on a dev who dedicates his spare time to creating a free, non-profit piece of software.
Also they’re not contributing or providing solutions, but feel entitled to demand and criticize. Loving every bit of it.
Especially true considering that the Lutris team has been looking for active devs for quite some time and is only maintained by a few people. If they have to rely on AI to keep the project alive, maybe the ones complaining should submit some actual code instead of opening issues in their personal crusade.