A user asked on the official Lutris GitHub two weeks ago “is lutris slop now”, noting an increasing amount of “LLM-generated commits”. The Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. It was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • magikmw@piefed.social · 20 points · 3 hours ago

    Worth mentioning that the user who started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised the Lutris maintainer went off like they did; the issue was not made in good faith.

    • Zos_Kia@jlai.lu · 9 points · 3 hours ago

      Yes, both threads are led by two accounts with probably less than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.

      In a world where you could contribute your time to make things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of person. It’s dangerous to catch their attention, because once they have you, they’ll coordinate over Reddit, Lemmy, GitHub, and Discord to ruin your reputation. The reputation of some guy who never did them any harm, apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.

      I’d sooner have a drink with a salesman from OpenAI than with one of them.

  • Katana314@lemmy.world · 29 points · 4 hours ago

    To add some context: my company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.

    I can believe him about there being a sweet spot, where it’s not used for everything, only for processes that might otherwise have taken a night of manual checks. The very real, very reasonable backlash is about how easily a poor management team or overconfident engineer will drift away from that sweet spot and merge stuff that hasn’t had enough scrutiny.

    Even Bernie Sanders acknowledged on the Senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.

    I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if avoiding AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.

    • tb_@lemmy.world · 4 points · 2 hours ago

      It can be useful for generating switch cases and other such not-quite copy-paste work too. There are reasonable use cases… if you ignore how the training data was sourced.

      • ChocolateFrostedSugarBombs@lemmy.world · 9 points · 2 hours ago

        And the incredible amount of damage and destruction it’s still inflicting on the environment, society, and the economy.

        No amount of output is worth that cost, even if it was always accurate with no unethical training.

  • nialv7@lemmy.world · 94 points · 5 hours ago

    You can criticise them, but ultimately they are an unpaid developer making their work freely available for the benefit of us all. At the very least, don’t harass the developer.

    • 4am@lemmy.zip · 4 points · 3 hours ago

      They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people whose experience with Linux is “I heard it’s faster for games”.

      It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better; AI tools still aren’t perfect, so you still have to do the legwork. Or at least let your community do it.

      Also, you should let your community make ethics decisions about whether to support you.

      Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.

      • Voroxpete@sh.itjust.works · 4 points · 3 hours ago

        Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

    • TrickDacy@lemmy.world · 42 points · 5 hours ago

      You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

      • Zos_Kia@jlai.lu · 15 points · 3 hours ago

        It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

        I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use those in a campaign across relevant places such as GitHub, Reddit, Lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses, and often quits as a result.

        The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

          • Zos_Kia@jlai.lu · 2 points · 3 hours ago

            Yeah, same. I’d like to think I’d answer, “I’ll use AI; if you don’t like it you can fork the project, and I wish you good luck. Go share your opinion on AI in an appropriate place.” But realistically there’s a high chance it catches me on a bad day and I get stupid.

        • TrickDacy@lemmy.world · 45 points · 5 hours ago

          I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

          Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.

          • aksdb@lemmy.world · 15 points · 5 hours ago

            Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

  • peacefulpixel@lemmy.world · 15 points · 5 hours ago

    If you’re going to stoop so low as to use fucking AI, have the decency to disclose it so people with actual standards know to avoid it. But to be fair, a cat-and-mouse game of whether it was used or not would make me avoid it anyway.

      • peacefulpixel@lemmy.world · 4 points · edited · 3 hours ago

        if you don’t want people to complain about you using AI, then don’t use AI. it’s easier than you think

        • 4am@lemmy.zip · 4 points · 3 hours ago

          This guy gets it.

          Be open about it. Many people will not like it. Many people will not trust your product any longer. You need to be willing to let those people go with grace, or else you’re already taking on a project you can’t handle.

  • Cyv_@lemmy.blahaj.zone · 113 points · edited · 7 hours ago

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.social (OP) · 32 points · 6 hours ago

      I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist, or over-engineer the shit out of something that could be written in just a few lines of code. Often it’s a combo of the two.

      • Fatal@piefed.social · 2 points · 2 hours ago

        At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a problem with a modern agent setup.
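        The “verify before handing back” loop the comment describes can be sketched minimally as below. This is an assumption about the workflow, not any specific agent’s implementation; the check commands passed in would be the project’s real compile and test commands (e.g. a `compileall` step plus its test runner).

```python
# Minimal sketch of a verification gate for agent-edited code:
# run the project's own checks and accept the change only if
# every one of them exits successfully.
import subprocess

def passes_checks(commands: list[list[str]]) -> bool:
    """Run each check command; True only if all exit with status 0."""
    return all(
        subprocess.run(cmd, capture_output=True).returncode == 0
        for cmd in commands
    )
```

        A hallucinated API then fails the compile/test step instead of reaching the human, which is the point being made.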

      • Scrollone@feddit.it · 16 points · 5 hours ago

        Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same kind you have on your phone keyboard.

        • yucandu@lemmy.world · 3 points · 3 hours ago

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • daikiki@lemmy.world · 4 points · edited · 3 hours ago

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world · 1 point · 2 hours ago

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

      • yucandu@lemmy.world · 1 point · 3 hours ago

        I create custom embedded devices with displays, and I’ve found it very useful for laying things out. Like asking it to take per-second wind speed and direction updates and build a wind rose out of them, with colored sections in each petal denoting the speed. It makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.
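        The aggregation behind a wind rose like the one described could be sketched as follows. This is an illustrative assumption, not the commenter’s actual code; the petal count and speed-band thresholds are made-up values.

```python
# Rough sketch of wind-rose aggregation: bucket incoming
# (direction, speed) samples into 16 compass petals, counting
# samples per speed band so each band can be drawn as a colored
# section of the petal.
from collections import defaultdict

PETALS = 16
SPEED_BANDS = [2.0, 5.0, 10.0]  # m/s thresholds; faster goes in the last band

def petal_index(direction_deg: float) -> int:
    """Petal 0 is centered on north (348.75°-11.25°), and so on clockwise."""
    width = 360 / PETALS
    return int(((direction_deg + width / 2) % 360) // width)

def band_index(speed: float) -> int:
    """Index of the speed band this sample falls into."""
    for i, limit in enumerate(SPEED_BANDS):
        if speed < limit:
            return i
    return len(SPEED_BANDS)

def build_rose(samples):
    """samples: iterable of (direction_deg, speed) -> {petal: {band: count}}."""
    rose = defaultdict(lambda: defaultdict(int))
    for direction, speed in samples:
        rose[petal_index(direction)][band_index(speed)] += 1
    return rose
```

        Rendering the counts as colored arcs is then purely a drawing problem on top of this structure.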

    • Alex@lemmy.ml · 14 points · 6 hours ago

      I expect because it wasn’t a user - just a random passer by throwing stones on their own personal crusade. The project only has two major contributors who are now being harassed in the issues for the choices they make about how to run their project.

      Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.

    • XLE@piefed.social · 11 points · 6 hours ago

      Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

    • Fizz@lemmy.nz · 8 points · 6 hours ago

      I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.

    • deadcade@lemmy.deadca.de · 72 points · 7 hours ago

      It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

      Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

      If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

      • silver_wings_of_morning@feddit.dk · 4 points · 2 hours ago

        Speaking only to the programming part of the slop machine: programmers typically copy code anyway. It’s not an ethical issue for a programmer to use a tool that has been trained on other people’s “stolen” code.

      • Ganbat@lemmy.dbzer0.com · 9 points · edited · 4 hours ago

        If the developer isn’t able to keep up, they should look for (co-)maintainers.

        Same energy as “Just go on Twitter and ask for free voice actors,” a la Vivziepop. A lot of people think this kind of shit is super easy, but realistically, it’s nearly impossible to get people to dedicate that kind of effort to something that can never be more than a money/time sink.

        • prole@lemmy.blahaj.zone · 4 points · 4 hours ago

          I was under the impression that FOSS developers do it for the love of the game and not for monetary compensation. They’re literally putting the software out for free even though they don’t need to. They are going to be making this shit regardless.

          • tempest@lemmy.ca · 1 point · 2 hours ago

            That is technically what they are doing, but they often don’t consider the consequences, and they react poorly when they realize that an Amazon (or whatever) comes along, contributes nothing, monetizes their work, and dumps the support and maintenance on them.

            That’s the name of the game, though, if you use an MIT license.

        • deadcade@lemmy.deadca.de · 2 points · 4 hours ago

          Absolutely true, but there’s one clear and obvious way out: drop support for the project yourself.

          If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

          FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.

      • bookmeat@fedinsfw.app · 43 points · 7 hours ago

        A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

        • wirelesswire@lemmy.zip · 50 points · 7 hours ago

          Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

          • lumpenproletariat@quokk.au · 13 points · 6 hours ago

            More reason to destroy copyright.

            Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.

        • Beacon@fedia.io · 4 points · 5 hours ago

          We weren’t all saying copyright altogether was unfair. In fact, I think most of us have always said copyright law should exist, just that it shouldn’t be “lifetime of the creator plus another 75 years after their death”. Copyright should be closer to how it was when the law first started, which is something like 20 years.

          (And personally imo there should also be some nuanced exceptions too.)

          • Luminous5481 "Lawless Heathen" [they/them]@anarchist.nexus · 4 points · edited · 7 hours ago

            Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.

            • astro@leminal.space · 2 points · 6 hours ago

              It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.

      • Goretantath@lemmy.world · 2 points · 5 hours ago

        Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.

    • Dettweiler@lemmy.dbzer0.com · 31 points · 7 hours ago

      It’s all about curation and review. If they use AI to make the whole project, it’s going to be bloated slop. If they use it to write sections that they then review, edit, and validate; then it’s all good.

      I’m fairly anti-AI for most current applications, but I’m not against purpose-built tools for improving workflow. I use some of Photoshop’s generative tools for editing parts of images I’m using for training material. Sometimes it does fine, sometimes I have to clean it up, and sometimes it’s so bad it’s not worth it. I’m being very selective, and if the details are wrong it’s no good. In the end, it’s still a photo I took, and it has some necessary touchups.

    • criss_cross@lemmy.world · 6 points · 5 hours ago

      If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.

      I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

      • pivot_root@lemmy.world · 3 points · 3 hours ago

        It’s the same for me.

        I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

        I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

        This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.

    • RightHandOfIkaros@lemmy.world · 15 points · 6 hours ago

      Personally, I have never seen LLM-generated code that works without needing to be edited, but I imagine for routine blocks of code and very common things it probably does fine. I don’t see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part, leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.

      However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. However, this programmer claims to have 30 years of experience, and if that’s the case then he likely knows this and probably edits the LLM output code himself.

      As I have said before, generative AI is a tool, like Photoshop. I don’t see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because it’s not being used by people who understand what the end result needs to be, and businesses will inevitably lose money.

    • drolex@sopuli.xyz · 18 points · 7 hours ago
      • Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
      • Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
      • Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires
        • Dremor@lemmy.world (mod) · 37 points · edited · 7 hours ago

          Being a developer, I don’t care if someone else uses my code. Code is like a brick: by itself it has little value; the real value lies in how it is used.
          If I find an optimal way to do something, my only wish is to make it available to as many people as possible. For those who come after.

        • adeoxymus@lemmy.world · 25 points · 7 hours ago

          Tbh, all programmers have been copy-pasting from each other forever. The middle step of searching Stack Overflow or GitHub for the code you want is simply removed.

          • galaxy_nova@lemmy.world · 7 points · 6 hours ago

            Exactly. If someone has already come up with an optimal solution, why the hell would I reimplement it? My real problems are not with LLMs themselves but with the sourcing of the training data and the power usage. If I could use an “ethically sourced” LLM locally, I’d be mostly happy. Ultimately, LLMs are also only good for code specifically. Architecture, or things that require a lot of thought like data pipelines, I’ve found AI to be pretty garbage at when experimenting.

          • wholookshere@piefed.blahaj.zone · 23 points · 7 hours ago

            LLMs have stolen works from more than just artists.

            At a minimum, ALL public repositories have been used as training data, regardless of license, including licenses that require all derivative work to be under the same license.

            So there’s more than just Lutris stolen.

            • Lung@lemmy.world · 4 points · 7 hours ago

              So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?

              • wholookshere@piefed.blahaj.zone · 6 points · 4 hours ago

                The fuck are you talking about?

                How is using a tool with billions of dollars behind it Robin Hood?

                How is stealing open source projects’ code, regardless of license, stealing from corporations?

                • Lung@lemmy.world · 1 point · edited · 2 hours ago
                  • he’s not anthropic, and doesn’t have billions of dollars
                  • stealing from open source is not stealing, that’s the point of open source
                  • the argument above is that these models are allegedly trained “regardless of license” i.e. implying they are trained on non-oss code
          • prole@lemmy.blahaj.zone · 4 points · 5 hours ago

            No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)

    • XLE@piefed.social · 5 points · edited · 6 hours ago

      “If” doing all the lifting here.

      If we ignore the mountain of evidence saying the opposite…

    • Kowowow@lemmy.ca · 4 points · 6 hours ago

      I want to one day make a game, and there’s no way I’m not prototyping it with LLM code. I would want to get things finalized by a real coder if I ever got the game finished, but I’ve never made real progress on learning to code, even in school.

  • aksdb@lemmy.world · 14 points · 6 hours ago

    Does everything have to be a god damn culture war now?! I really don’t give a fuck how people do their work. Judge the outcome not the workflow. No one gave a damn how sloppy some developers hacked together solutions that are widely used. But suddenly it’s an issue if coding agents are used? WTF.

    Stop the damn polarization for completely irrelevant things; we get polarized enough for political reasons; we don’t have to bring even more dissent into our communities and fuck each other up with in-fighting.

    • TrickDacy@lemmy.world · 19 points · 5 hours ago

      Culture war? Lol

      Yes, apparently the observation that software quality seems negatively impacted by AI use is not allowed to be expressed, because you don’t observe it.

      • aksdb@lemmy.world · 3 points · 5 hours ago

        The culture war part is the call to boycott a project or shit on its author because they use coding agents, as is done throughout these comments. The whole separation into “those who use AI are bad” and “those who hate AI are good” is a culture war. A needless one at that.

          • aksdb@lemmy.world · 1 point · edited · 5 hours ago

            I also brought facts and objective reasoning, yet I get downvoted.

            Yet anecdotal comments like “I tested it myself and it sucks” get upvoted, apparently simply because they fit people’s own worldview.

            That’s not polarization to you?

            • TrickDacy@lemmy.world · 2 points · 4 hours ago

              It’s for sure a polarizing topic, I just don’t see how it’s a culture war. “Sub-culture war” maybe?

              • aksdb@lemmy.world · 1 point · 4 hours ago

                Ok, maybe I misuse the word. If that’s the case, sorry about that. But I hope my point comes across anyway: I really, really dislike that the community (or multiple communities, even) gets split between people who are ok with AI and people who are against AI. This is, IMO, completely unnecessary. That doesn’t mean everyone should be ok with it, but we should not judge or condemn each other over a different opinion on the matter.

                If you notice a project going downhill, it’s fine to criticize the author (or the whole project) for the degradation in quality. If there are strong indicators that AI is involved, by all means leave a snarky remark about that while complaining. But ultimately it’s the fuckup of a human.

                • TrickDacy@lemmy.world · 1 point · 3 hours ago

                  What you’re taking issue with, though, is deeper than AI. It’s online discourse that is so rude and nuance-free.

                  In any case, this thread is full of people saying things like “that’s his right, but he communicated poorly about this” and getting piles of upvotes. So, yes, AI is very polarizing in this corner of the Internet, but I think the bigger issue here is that people don’t like his handling of it. Personally, if it weren’t for that, I probably would’ve thought “hmm, sounds sketchy to use AI in a product thousands of people depend on” and kept scrolling. But no, he was a dick about it and is now hiding his use of AI moving forward. So the people who hate AI are extra pissed about it. Likely because they fear others will follow that lead and enshittify the software they currently enjoy.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 hours ago

            The way flat earthers act? Yes. They treat it as a culture war. Just like anti-vaxxers.

        • Tony Bark@pawb.socialOP
          link
          fedilink
          English
          arrow-up
          6
          ·
          5 hours ago

          As I’ve said in an earlier thread, AI over-engineers code and hallucinates APIs that don’t exist. Furthermore, hallucinations themselves are a very well-studied phenomenon that has proven difficult to combat. People have very legit complaints about AI that you seem determined to dismiss as nothing more than a culture war.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            5 hours ago

            But those issues get caught by reviews and tests. You identified these issues and worked against them; why do you think the author of Lutris is not able to? Neither I nor the author says anyone should use AI-produced results as-is (i.e. vibe coding).

      • aksdb@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        5 hours ago

        That is for each developer to decide: whether they can handle it or not.

        As I said: judge the result, not the workflow.

        • prole@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          4
          ·
          4 hours ago

          judge the result, not the workflow.

          This kind of seems like bad advice in general. The process to create a result is often extremely important to be aware of. For example, if possible, I would like to not consume products built with slave labor.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            3 hours ago

            Depends. If you are generally careful about what products/projects you use and audit them, and you notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that code hygiene and dependencies suck, does it matter if they suck because the author mis-used coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?

            You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long, long before we had coding agents. I’ve used software where I later saw the code and was surprised it ever worked. Hell, I’ve found old code of mine where I wondered why it ever worked and what the fuck I’d been smoking back then.

            It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someone’s work or abilities just because they use an agent, without even looking at what they (the author, ultimately) produce. And by produce I don’t mean the final binary, but their code.

        • Tony Bark@pawb.socialOP
          link
          fedilink
          English
          arrow-up
          13
          ·
          5 hours ago

          As I said: judge the result, not the workflow.

          I’ve tested AI myself and seen the results. I’ll judge how I see fit.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            5 hours ago

            I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.

            • atrielienz@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              4 hours ago

              There’s a problem with that. The vast majority of Linux users are probably more tech savvy than average, but I’d wager that not all of them, or even a majority, have the skills to vet the code.

              Lots of the people in the gaming space who are having Lutris suggested/recommended to them are not going in to check that code for problems. They install the flatpak and move on with their lives.

              It appears (from what I’ve read which isn’t necessarily the end all be all) that the people taking exception to the use of AI to code Lutris are doing so because they do decompile and vet code.

              My understanding is that it’s harder to vet AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that don’t give any significant indication of what that code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

              You’re asking people who don’t have the skills to ignore people who do have the skills who are sounding the alarm.

              I get that this person is a single person writing code and disseminating it for free. I get that we should be thankful for free and open software. I fully understand why this person might use AI to help with coding.

              I understand that they are upset about the backlash. But that was a very much foreseeable consequence of the credits they gave the AI (a choice they made), and honestly the use of AI (which might have been called out later on if they hadn’t credited it).

              They shot themselves in the foot with the part of their response that was flippant and a “fuck you” to anyone who might find the use of AI concerning.

              There’s also the fact that AI is something a lot of people in the Linux community at large seem to already be boycotting, and boycotting derivatives of it makes sense.

              Just because you create something for free doesn’t mean people have to use it. Or that people aren’t free to boycott it.

              • aksdb@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                3 hours ago

                Thanks for that long answer. I agree completely with the second half of it. I also agree with most of the first half of it, but I have to add a remark to it:

                My understanding is that it’s harder to vet AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that don’t give any significant indication of what that code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

                That is mostly true, but also depends on the usage. You don’t have to tell an agent to “develop feature X” and then go for a coffee. You can issue relatively narrow scoped prompts that yield small amounts of changes/code which are far easier to review. You can work that way in small iterations, making it completely possible to follow along and adjust small things instead of getting a big ball of mud to entangle.

                And while it’s true that not everyone is able to vet code, that was also true before and without coding agents. Yet people run random curl-piped-to-bash commands they copy from some website because it says it will install whatever. They install something from Flathub without looking at the source (not even talking about chain of trust for the publishing process here). There is so much bad code out there written by people who are not really good engineers but who are motivated enough to put stuff together. They made, and still make, ugly mistakes that are hard to spot and, due to bad code quality, hard to review.

                The main risk of agents is that they also increase the speed of these developers, which means they pump out even more bad code. But the underlying issue existed before, and agents don’t automatically mean something is bad. It would also be dangerous to believe that, because it might reinforce a false sense of security when using a piece of code that was (likely) written without any AI influence. But that’s just not true; this code could be as harmful or even more harmful. You simply don’t know if you don’t review it. And as you said: most people don’t.

  • absquatulate@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    edit-2
    5 hours ago

    Getting real tired of this armchair activism, man. I get it, we all hate LLMs but it’s literally one or two burnt out guys writing this in their spare time. If people really want to do something useful at least go and review the code and then you can shit on his work for legitimate reasons if you really do find it’s bad. Stop demonizing open source devs ffs.

    • froufox@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 hours ago

      No wonder they burn out more and more. Nobody wants to contribute and help, but everyone is quick to criticise

  • warm@kbin.earth
    link
    fedilink
    arrow-up
    31
    ·
    7 hours ago

    These AI people are so delusional. They contradict themselves immediately.

    But I have over 30 years of programming experience

    Then you don’t need AI.

    In many ways, it couldn’t have been implemented in a worse way but it was AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

    ??? The common denominator is AI. By using it you are part of the problem. All mainstream AI is trained on stolen data.

    I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services.

    Then don’t?

    There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves.

    The “tools” require large amounts of storage, RAM, electricity, water etc etc. The only tool is the end user.

    Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

    So they are just an asshole and their excuse finding is just irrelevant.

    • froufox@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      23
      ·
      edit-2
      6 hours ago

      This guy has been maintaining that huge project for an eternity in his free time, but entitled hypocrites like you have the audacity to call him an asshole. No one needs your recommendations. Even if you have the experience to maintain and develop a project for 16 years, and your brain is capable of keeping everything in context and typing hundreds of lines manually for the most tedious tasks, good for you, but there are different people with different brains. AI helpers with proper tooling are a good instrument in the hands of a good engineer. They are basically better autocomplete and search tools, and they are amazing ‘rubber duck’ companions that make the coding process psychologically easier if you’re stressed, anxious, or depressed but need the job done. If you think about what you’re doing, you won’t produce slop whatever instrument you use; if not, you’ll write slop without AI too.

      When the bubble pops soon, AI will have to become sustainable economically and ecologically. The same happened during the dotcom bubble.

      So, either help the project, or leave opensource devs alone

      • rtxn@lemmy.world
        link
        fedilink
        English
        arrow-up
        20
        ·
        6 hours ago

        Nobody is beyond reproach, and nobody gets free passes, especially with the flagrant attitude they’ve shown toward concerns and criticism.

        • froufox@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 hours ago

          Because current AI products attract huge investments and do not pay off at all. Basically these companies have unlimited money and spend it on building huge data centres and facilities. But as soon as the funding runs out, they will have to moderate their appetites. Many companies will flop and go bankrupt, or just switch to something else. Cheap, local LLMs with a high efficiency/cost ratio should become dominant, as they won’t need so much infrastructure to support them. Something similar happened during the dotcom bubble, but the average person didn’t know or care about ecology and ethics as much as nowadays.

          That’s why I hate when “morally impeccable” people find an easy target, make it a scapegoat, and bully them online.

    • prole@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      4 hours ago

      Yeah, how is the insane water consumption a capitalism problem? Do China’s datacenters not need water?

  • Omega_Jimes@lemmy.ca
    link
    fedilink
    English
    arrow-up
    38
    ·
    7 hours ago

    I don’t support the use of AI tools in general, but i have a soft spot for long-term maintainers. These people generally don’t have enough support for this to be a full-time hobby, and when a project becomes popular the pressure is massive.

    If the community wont step up to take the burden off the maintainer, but they still want active development, what can you do? As long as the program continues to be high quality, i cant complain about a free thing.

  • Retail4068@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    4 hours ago

      You’re going to screech at this guy contributing his time and code, who in all likelihood will pump out more features. Absurd. Prejudice and fear have blinded a significant portion of the FOSS community.

  • LostWanderer@fedia.io
    link
    fedilink
    arrow-up
    15
    ·
    7 hours ago

      The thing that will change society is the fact this dude is covering up all the Claude LLM contributions, breaking the transparency that fosters trust in projects. He is creating a narrative that will allow others to simply use LLM-sourced code and hide it in human-created code. I don’t like that; it’s pretty disgusting in my opinion, as people should be able to see every bit of code and know who is responsible for it. Mathieu Comandon’s integrity is shattered by this serious trespass, and it is one he shouldn’t be allowed to get away with. Put him on blast for it; otherwise, others will try to do the same thing, potentially reducing the quality of open source projects. Claude LLM usage was already rancid enough…Enough for me to blacklist the whole thing.

    As a lover of open source, I plan to skill up and start contributing myself…As there is a reality of not having enough time or people power to maintain such massive projects that have a big scope. I’m in the midst of learning the basics and figuring out what the best programming languages are to learn (Python is going to be my first). I don’t want the infection of LLMs to spread any more in the open source community…As there is no way that will turn out to be a net positive for the community.

    • wagesj45@fedia.io
      link
      fedilink
      arrow-up
      9
      ·
      6 hours ago

      It wouldn’t be such a big deal if you weren’t facing immense harassment for using these tools. I don’t blame him for saying fuck it. If the code works, and has been reviewed/modified/approved by a human, then who cares.

      • iamthetot@piefed.ca
        link
        fedilink
        English
        arrow-up
        7
        ·
        5 hours ago

        I care. GenAI and AI bullshit in general is a massive ethical, environmental, and socio-economic mine field.

      • LostWanderer@fedia.io
        link
        fedilink
        arrow-up
        9
        ·
        6 hours ago

        Simply accepting the use of LLM tools is going to send the incorrect message, as it can be masked as approval. It will make techbros that peddle slopware bolder. However, I don’t condone harassment (be loud, be clear, and don’t harass). It does matter because, again, transparency is key, and that builds trust in open source projects. You might not care, but there are a lot of people who feel integrity in the code base matters as you are running that shit on your machine if you install it. Hiding the sources of code is closed-source behavior, and we cannot even properly evaluate a lot of the code they sling.

          • LostWanderer@fedia.io
            link
            fedilink
            arrow-up
            1
            ·
            3 hours ago

            One, fellow internet user, I specifically mentioned “integrity in the code base matters as you are running that shit on your machine if you install it”, not “integrity of their machine”. Two, I’m going to be real with you: while you are highlighting a very real concern, you are trying to distract from the harm of LLMs and how their usage in programming can break the integrity of code bases.

            The rush to develop LLMs, the data centers that will fuel this boom, and the components used to create computational power are just as damaging to poor countries that have rich mineral resources, leading to heavy pollution due to high demand for those resources.

            This whole push for LLM development is best summarized by iamthetot, who also replied to wagesj45, “I care. GenAI and AI bullshit in general is a massive ethical, environmental, and socio-economic mine field.”

    • Pika@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      5 hours ago

      While I fully agree with what you’re saying here, and that it should be stated, I personally believe that the only thing he’s done here is said the quiet part out loud.

      Other major projects are stating that the main reason they don’t do a full AI ban is that it’s increasingly difficult to look at someone’s code contributions and say: yes, that’s AI versus that’s a human.

      I recently made the swap from Sublime Text to Visual Studio Code because I was sick of the degradation in Sublime Text, and there weren’t any decent alternatives after the deprecation of Atom a few years back.

      I was amazed to find that, out of the box, VS Code has a full-on AI-assisted coding setup with AI-assisted autocompletion and suggestions, and even a chat box to talk with the model of your choice. This setup by default doesn’t add any credits or attribution, and while it isn’t anywhere near as integrated as a Claude setup by default, it’s still AI-assisted writing.

      The only thing the public brigades are actually doing is making contributors hide that they are using it, which increases the problem like you mentioned.

      A much better solution would be people stepping up to the plate and helping these projects, but it’s far easier to complain. I firmly understand why contributors have resorted to hiding the fact they use it; there’s far too much public outcry, without enough support, on most open source or publicly supported projects.

      • LostWanderer@fedia.io
        link
        fedilink
        arrow-up
        3
        ·
        4 hours ago

        “The only thing the public brigades are actually doing is making contributors hide that they are using it, which increases the problem like you mentioned.”

        As things are now, it would be best to eschew the use of LLMs, because they are tainted with the dark, ignoble goals of Big Tech. In order to stop Big Tech’s plan, we need to object, reject, and force LLMs to become a money drain. Dropping LLMs would only help burst this cursed bubble that techbros are desperately trying to keep inflated.

        “A much better solution would be people stepping up to the plate and helping these projects, but it’s far easier to complain. I firmly understand why contributors have resorted to hiding the fact they use it; there’s far too much public outcry, without enough support, on most open source or publicly supported projects.”

        If people are outright rejecting LLMs, it is better to drop these tools instead of embracing these things and using them in secret. Part of my drive to learn how to program is contributing to open source projects, but, the fact some of them embrace the use of LLMs to develop puts me off. However, despite this, learning to code and to contribute is of the utmost importance in order to help preserve the integrity of open source projects.

        The things that are falsely called “AI” are a demotivating factor, as people start to feel the futility of learning if a thing that cannot think or feel might trump them and be used instead of them having a job in tech. It is going to create a brain-drain event: if there is not enough fresh blood staying interested in a field like programming and software dev, that will damage a lot of open source projects, and even the Linux kernel. People age out, and when those old heads die, all that institutional knowledge will go with them, as very few people will be able to carry the torch and that essential knowledge for the next generation. Big Tech doesn’t understand the full impact of their actions. They are greedy, disruptive fucks.

  • theunknownmuncher@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    ·
    edit-2
    7 hours ago

    While I may not agree that letting AI write the code is a good idea, the complaints are dumber. It’s open source. Just fork it if you have a problem with it.

    Trying to hide it is shitty and immature, though. Even more reason to just fork it. They are proving they don’t have the maturity or transparency needed to run a project like that.

    • Nilz@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      2
      ·
      5 hours ago

      I hear this argument a lot about many things, but how is “just fork it” the answer? It’s not like just anyone can fork any project and continue developing it. The alternative would be forking it and considering it the final version, or what?

      • ogeist@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        5 hours ago

        But if it’s his project and he builds the software for himself, then why should he do what someone else wants?

        The just fork it argument is valid as the source is open, there are several projects born from forks. Specifically for game/wine managers there are other options…

        Regardless of the slop, I’m grateful to the developers and maintainers that take the time to share their work.

      • theunknownmuncher@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        5 hours ago

        It’s not like just anyone can fork any project and continue developing it.

        Why not? Happens all the time.

        I hear this argument a lot about many things

        Perhaps there’s a reason you hear it so often about so many things?

        This same principle also applies to the fediverse. An instance makes policy decisions that you don’t agree with? That’s okay, you can always host your own instance and make your own policies.

        • Nilz@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          1
          ·
          5 hours ago

          Well sure, software gets forked and continued all the time, but there’s quite a stark difference between just using open source software and actively maintaining it. Not everyone is a software developer, so I still don’t see why “just fork it” is the answer. Those who have the capabilities probably already thought of it, no?

          • theunknownmuncher@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            5 hours ago

            software gets forked and continued all the time

            If you understand that this is true, then I don’t really understand your argument. If this happens all the time for other software, then why won’t it for Lutris? You’re just saying that people who are not software developers cannot develop software? Okay… yeah.

            Those people are already completely dependent on software developers and their choices for all of the software they use, whether closed or open source, anyway.

              • theunknownmuncher@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                4 hours ago

                Okay, so yeah if you’re not a software developer, then forking it and developing the software is not an option for you. Your only option is to simply continue waiting for all of the software you use to be created by other people and handed to you, like you do already.