• SavvyWolf@pawb.social
    link
    fedilink
    English
    arrow-up
    39
    ·
    3 hours ago

    “This works perfectly, which is why I’m removing all ways to audit what it has contributed.”

    • dev_null@lemmy.ml
      link
      fedilink
      English
      arrow-up
      12
      ·
      2 hours ago

      “because that’s the only way to use it without being harassed online”

      I disagree with his reasons for removing it, but they are pretty clear.

  • TheLastOfHisName@piefed.social
    link
    fedilink
    English
    arrow-up
    14
    ·
    4 hours ago

    Lovely.
    I haven’t been able to get the Elder Scrolls Online (ESO) to run under Steam lately. I was able to get it running under Lutris, and it was fine until the 5.20 update. Haven’t been able to play at all. It was good while it lasted, I guess. Time to look for a new solution. If anybody has any recs, I’d love to hear them. I’m running Linux Mint 22.3.

    • Grass@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      16 minutes ago

      you should be able to just move the prefix into the folder managed by bottles or heroic, or add the existing folder location to either of those. All of them just present visuals on top of wine/proton. Might also have to manually download and set the wine/proton version from whichever apps built in management.

    • Sophocles@infosec.pub
      link
      fedilink
      English
      arrow-up
      14
      ·
      3 hours ago

      Bottles works much in the same way, and I always prefered it to Lutris. It’s also pretty easy to use plain old Wine if you’re comfy at all in the terminal. Pair it with winetricks and you can run most games with little hastle

  • CoyoteFacts@piefed.ca
    link
    fedilink
    English
    arrow-up
    138
    ·
    7 hours ago

    Whether or not I use Claude is not going to change society

    This gives me shopping cart theory vibes. I don’t usually base my moral compass based on whether my action will have some kind of measurable impact, but whether I believe it’s the right thing to do. After the intense doubling down in that discussion thread I’m definitely steering clear of lutris. It costs me very little effort to avoid projects that do icky things I don’t want to encourage (even though it may not have a measurable impact~)

    • blackbrook@mander.xyz
      link
      fedilink
      English
      arrow-up
      12
      ·
      5 hours ago

      Also, it is one thing to decide that something is not an ethical issue of concern, it is another thing to act with disrespect to everyone with a different opinion.

      • FauxLiving@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 hour ago

        it is another thing to act with disrespect to everyone with a different opinion.

        Unless that opinion is ‘I like using AI’, then they deserved the disrespect.

    • Joelk111@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      ·
      5 hours ago

      Lutris has always been a bit hit-or-miss for me, I avoided it unless it was the only option, as it only worked half the time. I don’t want it to come off like it shouldn’t exist, as stuff making Linux easier to use is great, but I don’t use it at all in my current workflows.

      • CoyoteFacts@piefed.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        5 hours ago

        I guess I’ve just been behind the times, but I’ve never had an incentive to switch. I just installed faugus and transferred everything over and it seems very slick. It seems to be missing 1 or 2 things, like environment variables per-game, but all the other important stuff seems to be here. I know what I’m doing with prefixes so having all the knobs to turn is great, but honestly linux gaming does not need most of those knobs nowadays.

  • mesa@piefed.social
    link
    fedilink
    English
    arrow-up
    107
    ·
    edit-2
    7 hours ago

    They are free to do what they want to on their repo.

    We are free to fork if need arises.

    Personally I don’t like projects not showing what AI has made. And most of Claude was made on stolen code. Its against the open source license they themselves use https://github.com/lutris/lutris/blob/master/LICENSE

    But almost no one actually enforces the license until the big companies show up. I hope they change their minds, but until then, im going to stop using/contributing for a while.

    • db2@lemmy.world
      link
      fedilink
      English
      arrow-up
      25
      ·
      7 hours ago

      Does anyone know which was the last version before the dev started shoveling slop in to the repo? The utter dipshit invalidated even the ability to license after that point, those releases are wholly worthless.

      • e8CArkcAuLE@piefed.social
        link
        fedilink
        English
        arrow-up
        4
        ·
        5 hours ago

        in 5 years from now there’s going to be totally coevolved but unique seed-lines for software. the once with AI, and the once without. how can you distinguish them? did the human that said it wrote them really write them? these problems aside, i suspect it will be forced to happen just from a security viewpoint, big companies won’t be able to get any kind of insurance anymore running AI-infested code.

    • nialv7@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      6 hours ago

      it’s more nuanced than that. Claude is made from stolen code, but it generally isn’t going to copy its training data verbatim (unless specifically told to). so copyright wise it’s more grey than strictly wrong. and though claude is made from stolen code, lutris developers are writing something they give off freely to the world, they are not profiting from the stolen code.

      does this make it ok? i don’t know. what if they use an open weights model rather than a closed one? would that be more acceptable?

    • bdonvr@thelemmy.clubOP
      link
      fedilink
      English
      arrow-up
      83
      ·
      8 hours ago

      Oh yeah. Here’s another nugget:

      Sometimes, I generate some code with Claude and commit by hand

      Sometimes, I write code manually and ask Claude to commit

      Sometimes, I ask OpenClaw to generate some code, which doesn’t put the Co-Authorship

      Sometimes, the whole thing is AI generated from end to end

      This is also a somewhat recent addition to Claude Code. I was kinda surprised when I first noticed it but didn’t think much of it, I was like “meh, I guess we’re doing that now, whatever, some people might take issue with it, whatever”. Also, do keep in mind that I love trolling people coming in my projects to complain about my methods.

      For those who are anti-AI, it’s a safe assumption that any addition to the project has had some kind of AI interaction during the development process.

      https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355

      • mlfh@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        98
        ·
        7 hours ago

        Sometimes, I ask OpenClaw to…

        This person should not be trusted with anything.

        • mavu@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          22
          ·
          5 hours ago

          That is the real shame in all this. I’m certainly not updating lutris any more, because there is no way of knowing what you will install on your system.

          You can trust humans (as in “trusting is an option”). You can never trust an LLM. And admitting that there might be unsupervised commits, being installed on possibly thousands of PCs is terrifying.

          • entropicdrift@lemmy.sdf.org
            link
            fedilink
            English
            arrow-up
            8
            ·
            edit-2
            4 hours ago

            Glad I use Heroic instead. Time to check what their AI policy is.

            Based on some PRs, they’re using github copilot to help with reviews but are generally against vibe coding

  • Ricky Rigatoni@piefed.zip
    link
    fedilink
    English
    arrow-up
    17
    ·
    6 hours ago

    Is this the same Lutris maintainer who took it out of the mint repos because he didn’t like some minor thing they did?

    • saoirse@lemmy.today
      link
      fedilink
      English
      arrow-up
      5
      ·
      5 hours ago

      I wouldn’t be shocked if that kinda thing happened lol (or if Mathieu at least tried), but why would Lutris be in the Mint repos anyway?

      They’re pretty small afaik, with the Ubuntu and Debian repos being used for non-Mint specific things

      All I could find was that Lutris “dropped support” for Mint a number of years ago, whatever that means, but Mint is now displayed alongside Ubuntu & ElementaryOS on the downloads page anyways

    • Alex@lemmy.ml
      link
      fedilink
      English
      arrow-up
      7
      ·
      6 hours ago

      There is no settled legal status on the output of AI systems and it’s certainly something that does need clarification going forward. The law may treat asking an LLM to regurgitate it’s training data vs following instructions in a local context differently. Human engineers are allowed to use “retained knowledge” from their experiences even if they can’t bring their notebooks from previous careers. LLMs are just better at it.

      • hperrin@lemmy.ca
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        6 hours ago

        As of March 2, it has been settled. AI generated works must have substantial human creative input in order to be copyrightable. Prompting the AI does not meet that requirement.

        https://www.morganlewis.com/pubs/2026/03/us-supreme-court-declines-to-consider-whether-ai-alone-can-create-copyrighted-works

        In other words, if the AI wrote the code, and you didn’t change it since then, it’s not yours at all. It’s public domain, no question.

        • yucandu@lemmy.world
          cake
          link
          fedilink
          English
          arrow-up
          6
          ·
          6 hours ago

          Prompting the AI alone does not meet that requirement. IE you can’t say “draw me a picture of a cat” and then copyright the picture of the cat claiming you made it.

          You can say “help me draw this left ear over here, now make the right ear up here, a little taller, darken the edges a bit”, all with prompts, but with your sufficient creative input.

          • hperrin@lemmy.ca
            link
            fedilink
            English
            arrow-up
            5
            ·
            edit-2
            4 hours ago

            That’s not how the dev said he’s generating code. He said sometimes he does it without any intervention at all.

            Also, that’s potentially copyrightable. That hasn’t been settled.

        • dgdft@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          5 hours ago

          Your link doesn’t support what you’re saying in the slightest. Have whatever opinion you want, but don’t shovel up transparent bullshit to push your narrative.

          TFA is about a a copyright on a work made by a purely autonomous device, and SCOTUS declining to hear a case doesn’t “settle” jack-shit.

          Quoting further:

          Thaler submitted an application to the US Copyright Office to register copyright in “A Recent Entrance to Paradise,” explicitly identifying the AI system as the author and stating the work was created without human intervention.

          For now, businesses and creators using AI should continue to rely on the longstanding human authorship requirement. Under current law, works made solely by autonomous AI are not eligible for copyright protection in the United States. Ongoing cases also consider the amount of human input, including prompting or post-generation editing, required to register copyright in an AI-generated work.[12]

          Companies should ensure a human contributes creatively and is named as the author in any copyright applications for AI-assisted works. To maximize protection, organizations should review their creative workflows and document human involvement in AI-assisted projects, particularly for commercial content. Organizations should continue to document the timing and scope of the use of AI in copyrightable works, for example by retaining prompts provided by the author. Internal policies should clarify attribution, ownership, the nature of creative input, and documentation requirements to avoid denied copyright applications.

          Iteratively working on a codebase by guiding an LLM’s design choices and feeding it bug reports is fundamentally different from this case you’re citing.

          • hperrin@lemmy.ca
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            4 hours ago

            If all you do is prompt the AI, “hey, fix bugs in this repo,” then you had no creative input into what it produces. So that kind of code would not be copyrightable, 100%. You can fight it in court, but the Supreme Court refusing to hear it means the lower court’s decision is settled law, and your chances of winning are essentially zero.

            Whether code where you hold its hand and basically pair program with it is copyrightable hasn’t been settled. Considering the dev said he does it both ways, the point is rather moot, since for sure, he doesn’t own the copyright to at least some of that AI generated code.

            OpenClaw is an autonomous system just like the one in that article, and the dev said that’s what he’s using at least some of the time. It generates and commits code without human intervention.

        • Alex@lemmy.ml
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 hours ago

          Glad it applies worldwide /s

          Slop can’t be copyrighted, great. We don’t want slop.

    • db2@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      7 hours ago

      “AI” has been known to present code from other projects and hence other licenses. It can’t become public domain unless all of that code was also public domain.

      • bss03@infosec.pub
        link
        fedilink
        English
        arrow-up
        1
        ·
        29 minutes ago

        I’d imagine there have been more nonsensical (than AI = public domain) legal decisions that have had the full force of law for decades.

        I recently dug around for a while, and if the copyright of works in the training data affects the copyright of outputs, no popular model can output anything that would even be close to acceptable for a contribution to an open-source project. Maybe if you trained a model exclusively on “The Stack” (NOT “The Pile”) and then included all the required attributions – but no ready-made model does that. All of the “open source” model frameworks that I could find included some amount of proprietary “pre-training” data that would also be an issue.

        If AI output is NOT affected by the copyright of training data… there might not BE a (legal) person that can hold any copyrights over it, which is pretty close to public domain.

  • woelkchen@lemmy.world
    link
    fedilink
    English
    arrow-up
    32
    ·
    7 hours ago

    Just assume everything is AI generated and feel free to ignore the GPLv3 because generated code doesn’t have any copyright. See how he reacts.

      • renegadespork@lemmy.jelliefrontier.net
        link
        fedilink
        English
        arrow-up
        19
        ·
        edit-2
        7 hours ago

        The legal effect of AI generated code on software licenses is untested in court and AFAIK has no explicit laws. So really no one knows how it will work yet.

        • yucandu@lemmy.world
          cake
          link
          fedilink
          English
          arrow-up
          9
          ·
          6 hours ago

          The US Copyright Office has updated its guidelines:

          If AI content is present, the Office will only register the work if the human contributions are sufficiently creative and if the AI-generated portions are supplementary or used as a tool under human direction. Essentially, they ask: “Is the work basically one of human authorship, with the computer merely assisting?” If yes, it can be protected (with a disclaimer that some content isn’t human-made). If no, if the AI’s role overshadows the human’s, then the work, or at least the AI-created portion, is not eligible for copyright.

          In Canada, where I live:

          So, can you claim copyright in an AI-generated work in Canada? As of 2025, the safest answer is: only if a human author contributed substantial creative effort to the final work. There needs to be some human “skill and judgment” or creative spark for a work to be protected.

          If the AI was just a tool in your hands, for instance, you used AI to enhance or assemble content that you guided then your contributions are protected and you are the author of the overall work. But if an AI truly created the material with you providing little more than a prompt or idea, the law may treat that output as having no human author, and thus no copyright.

          For now, anyone using AI in creative projects should keep documentation of their own input and creative choices. Emphasize the parts of the work where you exercised judgment or selected elements because those are likely what copyright will cover. And remember that copyright in AI-generated content is a fast-moving area.

          https://www.foundationsoflaw.com/post/can-you-claim-copyright-in-ai-generated-works-in-canada

          Makes sense to me.

        • Hubi@feddit.org
          link
          fedilink
          English
          arrow-up
          3
          ·
          7 hours ago

          Just assume everything is AI generated

          This is the part that will definitely not work.

    • pivot_root@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      ·
      edit-2
      4 hours ago

      Sometimes, I ask OpenClaw to generate some code

      https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355

      OpenClaw is extremely vulnerable to prompt injection. If the maintainer is using it to author code, you absolutely can not trust that the code is safe from exploits obfuscated as unintentional logic errors or bugs.

      There’s purity testing, and then there’s being cautious about running code made by someone who is doing something incredibly stupid and unsafe. This is the latter.

      • 9WhiteTeeth@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        ·
        46 minutes ago

        You are assuming the author is being unsafe & not auditing code for very basic security issues.

        Let me present this angle, small teams of volunteer open source developers finally have a way to help ease the amount of code they produce, but you want them to continue doing all the work manually because AI hurts your feefees.

        Further, you are openly declaring you don’t trust the devs to audit their own code.

        If you can find a security vulnerability in the code (it is open source after all) I’ll cede, but otherwise, I think it is a good thing responsible AI use can help shoulder the work these folks do for our benefit.

  • Alex@lemmy.ml
    link
    fedilink
    English
    arrow-up
    26
    ·
    7 hours ago

    It looks like the issue submitter is trolling a number of projects on their personal anti-AI crusade. I would take it more seriously if they had reviewed any of the PRs and identified issues with them.

    Yes AI slop is an issue (especially for maintainers) but it can still be a useful tool. If the maintainers want to use AI on their own code it should be their choice. Most forks fail because the righteous feeling of finally getting your own way on a repo you control usually falls away as you realise the people actually doing the work didn’t follow you.

    • bdonvr@thelemmy.clubOP
      link
      fedilink
      English
      arrow-up
      22
      ·
      7 hours ago

      Honestly the need for Lutris has gone way way down in the last couple years. I don’t know about forking it, but I think it’d be pretty easy to just avoid it. Less because there’s any concrete issues that I could point out, but more as a political statement and loss of confidence.

  • KiwiTB@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    7 hours ago

    This explains why it would break constantly… But that’s also why people moved to other solutions.

  • Hubi@feddit.org
    link
    fedilink
    English
    arrow-up
    22
    ·
    7 hours ago

    Meh, I don’t really care. It’s a free product and it does what I need it to. Just open an issue if there’s actually something wrong with the code itself or pick another software if you disagree with the maintainer. There’s really no need for drama here.

    • bdonvr@thelemmy.clubOP
      link
      fedilink
      English
      arrow-up
      19
      ·
      edit-2
      6 hours ago

      It’s more of a political stance.

      For a good example check out Asahi Linux: https://asahilinux.org/docs/project/policies/slop/

      It is the opinion of the Board that Large Language Models (LLMs), herein referred to as Slop Generators, are unsuitable for use as software engineering tools, particularly in the Free and Open Source Software movement.

      The use of Slop Generators in any contribution to the Asahi Linux project is expressly forbidden. Their use in any material capacity where code, documentation, engineering decisions, etc. are largely created with the “help” of a Slop Generators will be met with a single warning. Subsequent disregard for this policy will be met with an immediate and permanent ban from the Asahi Linux project and all associated spaces.

    • Señor Mono@feddit.org
      link
      fedilink
      English
      arrow-up
      6
      ·
      6 hours ago

      That’s why we cannot have nice things.

      People on the internet going nuclear about how a dev who dedicates his spare time to create a free, non-profit piece of software.

      Also they’re not contributing or providing solutions, but feel entitled to demand and criticize. Loving all about it.

      • Hubi@feddit.org
        link
        fedilink
        English
        arrow-up
        8
        ·
        6 hours ago

        Especially true considering that the Lutris team has been looking for active devs for quite some time and is only maintained by a few people. If they have to rely on AI to keep the project alive, maybe the ones complaining should submit some actual code instead of opening issues in their personal crusade.

  • ianhclark510@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    18
    ·
    7 hours ago

    Honestly? It’s a front end for some other tools to play games on Linux, if someone finds a performance issue or security hole we can react, but if this makes the Maintainers’s life easier I don’t know how much I really care, John Henry can fork it if they want and show how much better work they do than Claude