A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way but it was AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • southsamurai@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    11
    ·
    2 hours ago

    Yeah, this is actually one of the good things a technology like this can do.

    He’s dead right, in terms of slop, if it’s someone with training and experience using a tool, it doesn’t matter if that tool is vim or claude. It ain’t slop if it’s built right.

  • atrielienz@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    ·
    edit-2
    2 hours ago

    I think the simple fact is that some of the people in this thread don’t understand is that the people they’re asking to vet the code don’t know how.

    They may mean that the people who can vet code should do so before making a fuss about the AI written portions of it, but I don’t know that most of the people in opposition to their comments understand that context.

    I haven’t coded anything since the 90’s. I know HTML and basic CSS and that’s it. I wouldn’t have known where to start without guides to explain what commands in Linux do and how they work together. Growing up with various versions of Windows and DOS, I’d still consider myself a novice computer user. I absolutely do know how to go into command line and make things happen. But I wouldn’t know where to start to make a program. It’s not part of my skill set.

    Most users are like that. They engage with only parts of a thing. It’s why so many people these days are computer illiterate due to the rise of smartphone usage and apps for everything.

    It’d be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn’t retract. A lot of people wouldn’t know where to start.

    I fully agree that other coders on the internet who frequent places like GitHub and make it a point to vet the code of other devs who provide their code for free probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed up behavior.

    But the way I see it there’s two different groups and they have very different views of this situation.

    The people who can’t code are consumers. Their contribution is to use the software if they want, and if it works for them to spread by word of mouth what they like about it. Maybe to donate if they can and the dev accepts donations.

    If those people choose to boycott, it’ll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.

    The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there’s something wrong.

    I suppose there’s a third subset of people in the case of FOSS work who can and often do help with projects and I wonder if that is better or worse for the reasons listed in the thread like poorly human written code and simple mistakes.

    Humans certainly aren’t infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren’t going to make something up unless they have an ulterior motive.

    Perhaps breaking things down into tiny chunks makes AI better or it’s outputs more usable. Maybe there’s a 'sweet spot".

    But I think people also get worried that what happens a lot is people who use AI often start to offload their own thinking onto it and that’s dangerous for many reasons.

    This person also admits to having depression. Depression can affect how you respond to information, how well you actually understand the information in front of you. It can make you forget things you know, or make things that much harder to recall.

    I know that from experience. So in this case does the AI have more potential to help or do harm?

    There’s a lot to this. I have not personally used Lutris, but before this happened I wouldn’t have thought twice about saying that I’ve heard good things about it if someone asked me for a Heroic launcher style software for Linux.

    But just like the Ladybird fork of Firefox I don’t know that I feel comfortable suggesting it if this is the state of things. For the same reason I don’t currently feel comfortable recommending Windows 11 or Chrome.

    There are so many sensitive things that OS’s, and web browsers handle that people take for granted. If nobody was sounding the alarm about those, I feel like nothing would get better. By contrast, Lutris isn’t swimming in a big pond of sensitive information but it is running on people’s hardware and they should have both the right to be informed and the right to choose.

  • darkangelazuarl@lemmy.world
    link
    fedilink
    English
    arrow-up
    12
    ·
    2 hours ago

    If he’s using like an IDE and not vibe coding then I don’t have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn’t even write this comment I just wrote without asking AI for assistance.

  • vortexal@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    3
    ·
    2 hours ago

    In this particular case, I think the use of AI is tolerable. But as someone who uses Lutris sometimes, I do have concerns about whether or not this will cause issues with running games through it. How do we know if the AI generated code is going to make Lutris slow or possibly cause games to not work properly that otherwise would have worked perfectly fine?

    Whenever I’ve tried running games in both Wine by itself and Lutris, I have noticed that they do often run noticeably slower in Lutris. And I also don’t have the best PC to begin with, so this is a big concern of mine.

  • magikmw@piefed.social
    link
    fedilink
    English
    arrow-up
    34
    ·
    6 hours ago

    Worth mentioning that the user that started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised lutris’ maintainer went off like they did, the issue is not made with good faith.

    • Zos_Kia@jlai.lu
      link
      fedilink
      English
      arrow-up
      17
      ·
      5 hours ago

      Yes, both threads are led by two accounts with probably less than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.

      In a world where you could contribute your time to make some things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of people. It’s dangerous to catch their attention, as once they have you they’ll coordinate over reddit, lemmy, github, discord to ruin your reputation. The reputation of some guy who never ever did them any harm apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.

      I’d sooner have a drink with a salesman from OpenAI than with one of them.

  • Katana314@lemmy.world
    link
    fedilink
    English
    arrow-up
    40
    ·
    6 hours ago

    To admit some context: My company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.

    I can believe him about there being a sweet spot; where it’s not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is how easily a poor management team or overconfident engineer will fall away from that sweet spot, and merge stuff that hasn’t had enough scrutiny.

    Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.

    I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if the avoidance of AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.

    • underisk@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 hour ago

      Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives.

      I don’t think you should make a claim like this while AI is being heavily subsidized and burning VC cash to stay afloat. The truth is whatever value it may add to such a society might actually be completely negated by it’s resource costs. Is even “moderate” AI use ecologically or economically sustainable?

    • tb_@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      5 hours ago

      It can be useful for generating switch cases and other such not-quite copy-paste work too. There are reasonable use cases… if you ignore how the training data was sourced.

      • ChocolateFrostedSugarBombs@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        ·
        4 hours ago

        And the incredible amount of damage and destruction it’s still inflicting on the environment, society, and the economy.

        No amount of output is worth that cost, even if it was always accurate with no unethical training.

  • nialv7@lemmy.world
    link
    fedilink
    English
    arrow-up
    116
    ·
    8 hours ago

    you can criticise them but ultimately they are a unpaid developer making their work freely available to the benefit of us all. at least don’t harass the developer.

    • TrickDacy@lemmy.world
      link
      fedilink
      English
      arrow-up
      51
      ·
      7 hours ago

      You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

      • Zos_Kia@jlai.lu
        link
        fedilink
        English
        arrow-up
        26
        ·
        6 hours ago

        It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

        I’ve seen it play a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as github, reddit, lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity ! The dev spirals, posts increasingly unhinged responses and often quits as a result.

        The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

          • Zos_Kia@jlai.lu
            link
            fedilink
            English
            arrow-up
            4
            ·
            5 hours ago

            Yeah same. I’d like to think i’d answer “I’ll use AI, if you don’t like it you can fork the project and i wish you good luck. Go share your opinion on AI in an appropriate place.”. But realistically there’s a high chance it catches me on a bad day and i get stupid.

        • TrickDacy@lemmy.world
          link
          fedilink
          English
          arrow-up
          53
          ·
          7 hours ago

          I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

          Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            16
            ·
            7 hours ago

            Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

    • 4am@lemmy.zip
      link
      fedilink
      English
      arrow-up
      6
      ·
      5 hours ago

      They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people who’s experience with Linux is “I heard it’s faster for games”

      It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better, AI tools still aren’t perfect and so you still have to do the legwork. Or at least let your community.

      Also, you should let your community make ethics decisions about whether to support you.

      Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.

      • Voroxpete@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        9
        ·
        5 hours ago

        Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

  • peacefulpixel@lemmy.world
    link
    fedilink
    English
    arrow-up
    18
    ·
    7 hours ago

    if you’re going to stoop so low as to use fucking AI have the decency to show it so people with actual standards know to avoid it. but to be fair, a cat n mouse game of whether it was used or not would make me avoid it anyway

      • peacefulpixel@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        6 hours ago

        if you don’t want people to complain about you using AI, then don’t use AI. it’s easier than you think

        • 4am@lemmy.zip
          link
          fedilink
          English
          arrow-up
          7
          ·
          5 hours ago

          This guy gets it.

          Be open about it. Many people will not like it. Many people will not trust your product any longer. You need to be willing to let those people go with grace, or else you’re already taking on a project you can’t handle.

  • Cyv_@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    133
    ·
    edit-2
    10 hours ago

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.socialOP
      link
      fedilink
      English
      arrow-up
      39
      ·
      8 hours ago

      I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist or over engineer the shit out of something could be written in just a few lines of code. Often it would be a combo of the two.

      • Vlyn@lemmy.zip
        link
        fedilink
        English
        arrow-up
        3
        ·
        2 hours ago

        You might genuinely be using it wrong.

        At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly setup.

        Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output, that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

        Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

        Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).

      • Scrollone@feddit.it
        link
        fedilink
        English
        arrow-up
        23
        ·
        8 hours ago

        Yeah I mean. It’s not like AI can think. It’s just a glorified text predictor, the same you have on your phone keyboard

        • yucandu@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          6 hours ago

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • BackgrndNoize@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            2 hours ago

            Not even free, just cheaper than an actual employee for now, but greed is inevitable and AI is computationally expensive, it’s only a matter of time before these AI companies start cranking up the prices.

          • daikiki@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            ·
            edit-2
            5 hours ago

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 hours ago

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

      • Fatal@piefed.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 hours ago

        At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.

      • yucandu@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 hours ago

        I create custom embedded devices with displays and I’ve found it very useful for laying things out. Like asking it to take secondly wind speed and direction updates and build a Wind Rose out of it, with colored sections in each petal denoting the speed… it makes mistakes but then you just go back and reiterate on those mistakes. I’m able to do so much more, so much faster.

    • Alex@lemmy.ml
      link
      fedilink
      English
      arrow-up
      16
      ·
      9 hours ago

      I expect because it wasn’t a user - just a random passer by throwing stones on their own personal crusade. The project only has two major contributors who are now being harassed in the issues for the choices they make about how to run their project.

      Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      13
      ·
      9 hours ago

      Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

    • Fizz@lemmy.nz
      link
      fedilink
      English
      arrow-up
      9
      ·
      9 hours ago

      I’m the opposite. Its weird to me for someone to add an AI as a co author. Submit it as normal.

    • deadcade@lemmy.deadca.de
      link
      fedilink
      English
      arrow-up
      76
      ·
      10 hours ago

      It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

      Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

      If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

      • silver_wings_of_morning@feddit.dk
        link
        fedilink
        English
        arrow-up
        5
        ·
        5 hours ago

        Speaking only on the programming part of the slop machine, programmers typically copy code anyways. It’s not an ethical issue for a programmer using a tool that has been trained on other people’s “stolen” code.

      • Ganbat@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        10
        ·
        edit-2
        7 hours ago

        If the developer isn’t able to keep up, they should look for (co-)maintainers.

        Same energy as “Just go on Twitter and ask for free voice actors,” a la Vivziepop. A lot of people think this kind of shit is super easy, but realistically, it’s nearly impossible to get people to dedicate that kind of effort to something that can never be more than a money/time sink.

        • Vlyn@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 hours ago

          Hey, if your project is important enough you might get your own Jia Tan (:

        • prole@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          5
          ·
          6 hours ago

          I was under the impression that FOSS developers do it for the love of the game and not for monetary compensation. They’re literally putting the software out for free even though they don’t need to. They are going to be making this shit regardless.

          • Ganbat@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            2 hours ago

            My point was “Help me with my passion project for nothing” is a much harder sell. “Just find some help,” is advice along the lines of “Just get in a plane and fly it.”

          • tempest@lemmy.ca
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 hours ago

            That is what they are technically doing but they often don’t always consider the consequences and often react poorly when they realize that an Amazon (it whatever) comes along and contributes nothing and monetizes their work while dumping the support and maintenance on them.

            That is the name of the game though if you use an MIT license.

        • deadcade@lemmy.deadca.de
          link
          fedilink
          English
          arrow-up
          2
          ·
          7 hours ago

          Absolutely true, but there’s one clear and obvious way; drop support for the project yourself.

          If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

          FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.

      • bookmeat@fedinsfw.app
        link
        fedilink
        English
        arrow-up
        43
        ·
        10 hours ago

        A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

        • wirelesswire@lemmy.zip
          link
          fedilink
          English
          arrow-up
          51
          ·
          10 hours ago

          Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

          • lumpenproletariat@quokk.au
            link
            fedilink
            English
            arrow-up
            13
            ·
            8 hours ago

            More reason to destroy copyright.

            Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.

          • Luminous5481 "Lawless Heathen" [they/them]@anarchist.nexus
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            9 hours ago

            Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.

            • astro@leminal.space
              link
              fedilink
              English
              arrow-up
              3
              ·
              8 hours ago

              It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.

        • Beacon@fedia.io
          link
          fedilink
          arrow-up
          4
          ·
          7 hours ago

          We weren’t all saying copyright altogether was unfair. In fact i think most of us have always said copyright law should exist, just that it shouldn’t be like ‘lifetime of the creator plus another 75 years after their death’. Copyright should be closer to how it was when the law was first started, which is something like 20 years.

          (And personally imo there should also be some nuanced exceptions too.)

      • Goretantath@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        8 hours ago

        Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.

    • Dettweiler@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      35
      ·
      9 hours ago

      It’s all about curation and review. If they use AI to make the whole project, it’s going to be bloated slop. If they use it to write sections that they then review, edit, and validate; then it’s all good.

      I’m fairly anti-AI for most current applications, but I’m not against purpose-built tools for improving workflow. I use some of Photoshop’s generative tools for editing parts of images I’m using for training material. Sometimes it does fine, sometimes I have to clean it up, and sometimes it’s so bad it’s not worth it. I’m being very selective, and if the details are wrong it’s no good. In the end, it’s still a photo I took, and it has some necessary touchups.

    • criss_cross@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      7 hours ago

      If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.

      I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

      • pivot_root@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        ·
        5 hours ago

        It’s the same for me.

        I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

        I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

        This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.

    • RightHandOfIkaros@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      ·
      9 hours ago

      Personally, I have never seen LLM generated code that works without needing to be edited, but I imagine for routine blocks of code and very common things it probably does fine. I dont see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.

      However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. However, this programmer claims to have 30 years of experience, and if thats the case then he likely knows this and probably edits the LLM output code himself.

      As I have said before, Generative AI is a tool, like PhotoShop. I dont see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because its not being used by people that understand what the end result needs to be, and businesses will inevitably lose money.

    • drolex@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      18
      ·
      9 hours ago
      • Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
      • Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
      • Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires
        • Dremor@lemmy.worldM
          link
          fedilink
          English
          arrow-up
          37
          ·
          edit-2
          10 hours ago

          Being a developer, I don’t care if someone else uses my code. Code is like a brick. By itself it has little value, the real value lies on how it is used.
          If I find an optimal way to do something, my only wish is to make it available to as much people as possible. For those who comes after.

        • adeoxymus@lemmy.world
          link
          fedilink
          English
          arrow-up
          25
          ·
          9 hours ago

          Tbh all programmers have been copy pasting from each other forever. The middle step of searching stack overflow or GitHub for the code you want is simply removed

          • galaxy_nova@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            ·
            8 hours ago

            Exactly. If someone has already come up with an optimal solution why the hell would I reimplement it. My real problems are not with LLMs themselves but rather the sourcing of the training data and the power usage. If I could use an “ethically sourced” llm locally I’d be mostly happy. Ultimately LLMs are also only good for code specifically. Architecture or things that require a lot of thought like data pipelines I’ve found AI to be pretty garbage at when experimenting

          • wholookshere@piefed.blahaj.zone
            link
            fedilink
            English
            arrow-up
            24
            ·
            10 hours ago

            LLMs have stolen works from more than just artists.

            ALL of public repositories at a minimum have been used as training, regardless of licence. including licneses that require all dirivitive work be under the same license.

            so there’s more than just lutris stollen.

            • Lung@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              10 hours ago

              So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?

              • wholookshere@piefed.blahaj.zone
                link
                fedilink
                English
                arrow-up
                7
                ·
                6 hours ago

                The fuck you talking about.

                Using a tool with billions of dollars behind it robinhood?

                How is stealing open source prihcets code regardless of license stealing fr corporation’s?

                • Lung@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  4 hours ago
                  • he’s not anthropic, and doesn’t have billions of dollars
                  • stealing from open source is not stealing, that’s the point of open source
                  • the argument above is that these models are allegedly trained “regardless of license” i.e. implying they are trained on non-oss code
          • prole@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            4
            ·
            7 hours ago

            No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      8 hours ago

      “If” doing all the lifting here.

      If we ignore the mountain of evidence saying the opposite…

    • Kowowow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      4
      ·
      9 hours ago

      I want to one day make a game and there is no way I’m not prototyping it with llm code, though I would want to get things finalized by a real coder if I ever got the game finished but I’ve never made real progress on learning code even in school

  • warm@kbin.earth
    link
    fedilink
    arrow-up
    36
    ·
    9 hours ago

    These AI people are so delusional. They contradict themselves immediately.

    But I have over 30 years of programming experience

    Then you don’t need AI.

    In many ways, it couldn’t have been implemented in a worse way but it was AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

    ??? The common denominator is AI. By using it you are part of the problem. All mainstream AI is trained on stolen data.

    I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services.

    Then don’t?

    There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves.

    The “tools” require large amounts of storage, RAM, electricity, water etc etc. The only tool is the end user.

    Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

    So they are just an asshole and their excuse finding is just irrelevant.

    • froufox@lemmy.blahaj.zone
      cake
      link
      fedilink
      English
      arrow-up
      24
      ·
      edit-2
      9 hours ago

      This guy is maintaining that huge project for an eternity in his free time, but entitled hypocrits like you have audacity to call him an asshole. No one needs your recommendations. Even if you have experience to maintain and develop a project for 16 years and your brain is capable to keep everything in the context, and type hundreds of lines manually for the most tedious tasks—good for you, but there are different people with different brains. AI helpers with proper tooling is a good instrument in hands of a good engineer. They are basically better autocomplete and searching tools, and they are amazing ‘rubber duck’ companions making coding process psychologically easier if you stressed, anxious, or depressed, but need the job to be done. If you think what you’re doing, you won’t produce slop whatever instrument you use, if not—you’ll write slop without AI.

      When bubble pops soon, AI have to become sustainable economically and ecologically. Same happened during the dotcom bubble.

      So, either help the project, or leave opensource devs alone

      • warm@kbin.earth
        link
        fedilink
        arrow-up
        3
        ·
        54 minutes ago

        Lmao. I have nothing but respect for FOSS projects and have donated plenty of money to many.

        What I am not supporting is the use the of AI, if they want to, then they can, I will just stop using their project.

        Yes, you are an asshole if you dont disclose it and the comments I’ve seen from him just cements that for me more.

        AI can fuck off, we havent needed it for decades and decades, we dont need it now.

      • rtxn@lemmy.world
        link
        fedilink
        English
        arrow-up
        24
        ·
        8 hours ago

        Nobody is beyond reproach, and nobody gets free passes, especially with the flagrant attitude they’ve shown toward concerns and criticism.

        • froufox@lemmy.blahaj.zone
          cake
          link
          fedilink
          English
          arrow-up
          1
          ·
          5 hours ago

          Because current AI products attract huge investments and do not pay off at all. Basically they companies have unlimited money, and spend them on building huge data centres and facilities. But as soon as finding will run out, they have to moderate appetites. Many companies will flop and go bankrupt or just switch to something else. Cheap and local LLMs with high efficiency/cost rate should be dominant, as they won’t need so much infrastructure to support. Kinda similar was during dotcom bubble, but average person didn’t knew and care about ecology and ethics so much as nowadays.

          That’s why I hate when “morally impeccable” people find an easy target, make it a scapegoat, and bully them online.

    • prole@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 hours ago

      Yeah, how is the insane water consumption a capitalism problem? Do China’s datacenters not need water?

  • Omega_Jimes@lemmy.ca
    link
    fedilink
    English
    arrow-up
    42
    ·
    10 hours ago

    I don’t support the use of AI tools in general, but i have a soft spot for long-term maintainers. These people generally don’t have enough support for this to be a full-time hobby, and when a project becomes popular the pressure is massive.

    If the community wont step up to take the burden off the maintainer, but they still want active development, what can you do? As long as the program continues to be high quality, i cant complain about a free thing.

  • aksdb@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    ·
    8 hours ago

    Does everything have to be a god damn culture war now?! I really don’t give a fuck how people do their work. Judge the outcome not the workflow. No one gave a damn how sloppy some developers hacked together solutions that are widely used. But suddenly it’s an issue if coding agents are used? WTF.

    Stop the damn polarization for completely irrelevant things; we get polarized enough for political reasons; we don’t have to bring even more dissent into our communities and fuck each other up with in-fighting.

    • TrickDacy@lemmy.world
      link
      fedilink
      English
      arrow-up
      25
      ·
      8 hours ago

      Culture war? Lol

      Yes, the observation that software quality seems negatively impacted by ai use is not allowed to be expressed, because you don’t observe it.

      • aksdb@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        7 hours ago

        The culture war part is the call to boycott a project or shit on its author because they use coding agents, as is done throughout these comments. The whole separation into “those who use AI are bad” and “those who hate AI are good” is a culture war. A needless one at that.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            7 hours ago

            I also brought facts and objective reasoning, yet I get downvoted.

            Yet anecdotal comments like “I tested it myself and it sucks” get upvoted; apparently simply because it fits the own worldview.

            That’s not polarization to you?

            • TrickDacy@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              6 hours ago

              It’s for sure a polarizing topic, I just don’t see how it’s a culture war. “Sub-culture war” maybe?

              • aksdb@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                6 hours ago

                Ok maybe I mis-use the word. If that’s the case, sorry about that. But I hope my point comes across anyway: I really really dislike that the community (or multiple communities, even) get split between people who are ok with AI and who are against AI. This is, IMO, completely unnecessary. That doesn’t mean everyone should be ok with it, but we should not judge or condemn each other because of a different opinion on the matter.

                If you notice a project goes downhill, it’s fine to criticize the author (or the whole project) for the degredation in quality. If there are strong indicators that AI is involved, by all means leave a snarky remark about that while complaining. But ultimately it’s the fuckup of a human.

                • TrickDacy@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  6 hours ago

                  What you’re taking issue with though is deeper than ai. It’s online discourse that is so rude and nuance-less.

                  In any case, this thread is full of people saying things like “that’s his right to do this but he communicated poorly about this” and getting piles of upvotes. So, yes ai is very polarizing in this corner of the Internet, but I think it’s much more at issue here that people don’t like his handling of it. I know that personally if it weren’t for that I probably would’ve thought “hmm sounds sketchy to use ai in a product thousands of people depend on” and kept scrolling. But no, he was a dick about it and is now hiding his use of ai moving forward. So the people who hate AI are extra pissed about it. Likely because they fear others will follow that lead and enshittify the software they currently enjoy.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            6 hours ago

            The way flat earthers act? Yes. They treat it as a culture war. Just like anti-vaxers.

        • Tony Bark@pawb.socialOP
          link
          fedilink
          English
          arrow-up
          6
          ·
          7 hours ago

          As I’ve said in an earlier thread, AI over engineers code and hallucinates APIs that don’t exist. Furthermore, hallucinations themselves are a very well studied phenomenon that has proven difficult to combat. People have very legit compliments about AI that you seem to be determined to dismiss as nothing more than a culture war.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            7 hours ago

            But those issues get determined by reviews and tests. You determined these issues and worked against them, why do you think the author of Lutris is not able to? Neither I nor the author says anyone should use AI produced results as is (i.e vibe code).

      • Voroxpete@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 hours ago

        But that kind of proves their point, right?

        Yes, a lot of projects have had issues with contributers who push unreviewed AI slop that they don’t understand, ultimately creating more work for the project. Or with avalanches of AI code review bug reports that do nothing to help. But that’s not what’s happening here.

        In this case, the main developer of the project is choosing to use AI, on their own terms, because they find it helpful, and people are giving them shit for it. It’s their project and they feel this technology is beneficial. Isn’t that their call to make? Why are people treating the former and the latter as completely interchangeable scenarios when they’re clearly not? It kind of does suggest that people are coming at this from a more ideological rather than rational perspective.

      • aksdb@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        ·
        edit-2
        8 hours ago

        That is for each developer to decide, if they can handle it or not.

        As I said: judge the result, not the workflow.

        • Tony Bark@pawb.socialOP
          link
          fedilink
          English
          arrow-up
          14
          ·
          8 hours ago

          As I said: judge the result, not the workflow.

          I’ve tested AI myself and seen the results. I’ll judge how I see fit.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            ·
            8 hours ago

            I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.

            • atrielienz@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              6 hours ago

              There’s a problem with that. The vast majority of Linux users are probably more tech savvy than average but I’d wager not all of them or even the vast majority have the skills to vet the code.

              Lots of the people in the gaming space who are having Lutris suggested/recommended to them are not going in to check that code for problems. They install the flatpak on move on with their lives.

              It appears (from what I’ve read which isn’t necessarily the end all be all) that the people taking exception to the use of AI to code Lutris are doing so because they do decompile and vet code.

              My understanding is that it’s harder to get AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and or do so in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general from my understanding and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

              You’re asking people who don’t have the skills to ignore people who do have the skills who are sounding the alarm.

              I get that this person is a single person writing code and disseminating it for free. I get that we should be thankful for free and open software. I fully understand why this person might use AI to help with coding.

              I understand that they are upset about the backlash. But that was a very much foreseeable consequence of the credits they gave the AI (a choice they made), and honestly the use of AI (which might have been called out later on if they hadn’t credited it).

              They shot themselves in the foot with the part of their response that was flippant and a “fuck you” to anyone who might find the use of AI concerning.

              There’s also the fact that AI is something that a lot of people in the Linux community at large seem to already be boycotting and boycotting derivatives of it make sense.

              Just because you create something for free doesn’t mean people have to use it. Or that people aren’t free to boycott it.

              • aksdb@lemmy.world
                link
                fedilink
                English
                arrow-up
                3
                ·
                5 hours ago

                Thanks for that long answer. I agree completely with the second half of it. I also agree with most of the first half of it, but I have to add a remark to it:

                My understanding is that it’s harder to get AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and or do so in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general from my understanding and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

                That is mostly true, but also depends on the usage. You don’t have to tell an agent to “develop feature X” and then go for a coffee. You can issue relatively narrow scoped prompts that yield small amounts of changes/code which are far easier to review. You can work that way in small iterations, making it completely possible to follow along and adjust small things instead of getting a big ball of mud to entangle.

                And while it’s true that not everyone is able to vet code, that was also true before and without coding agents. Yet people run random curl-piped-to-bash commands they copy from some website because it says it will install whatever. They install something from flathub without looking at the source (not even talking about chain of trust for the publishing process here). There is so much bad code out there written by people who are not really good engineers but who are motivated enough to put stuff together. They also made and make ugly mistakes that are hard to spot and due to bad code quality hard to review.

                The main risk of agents is, that they also increase the speed of these developers which means they pump out even more bad code. But the underlying issue existed before and agents don’t automatically mean something is bad. That would also be dangerous to believe that, because that might enforce even more the feeling of security when using a piece of code that was (likely) written without any AI influence. But that’s just not true; this code could be as harmful or even more harmful. You simply don’t know if you don’t review it. And as you said: most people don’t.

                • Voroxpete@sh.itjust.works
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  2 hours ago

                  Frankly, most AI generated code is often easier to review, thanks to a combination of standardized practices (LLMs regress to the mean by design) and a somewhat overly enthusiastic approach to commenting and segmented layouts.

        • prole@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          5
          ·
          6 hours ago

          judge the result, not the workflow.

          This kind of seems like bad advice in general. The process to create a result is often extremely important to be aware of. For example, if possible, I would like to not consume products built with slave labor.

          • Voroxpete@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            2 hours ago

            The thing is, you’re conflating ethical and practical concerns here. The commenter you’re responding to is clearly talking about the practical aspects of using AI tools.

            If you have a fundamental moral issue with AI that is entirely independent of how efficacious it is, that’s fine. That’s a completely reasonable position to hold. But don’t fall into the trap of wanting every use of genAI to be impractical because it aligns with your morality to feel that way.

            If this is an ethical stance that you truly hold, you should be willing to believe that using these tools is bad even when they’re effective. But a lot of people instead have to insist that every use of AI is impractical, in the face of any evidence to the contrary, because they’ve talked themselves into believing that on some fundamental level. Like “If AI is ever useful, that means I’m wrong about it being immoral.”

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            6 hours ago

            Depends. If you are generally careful about what products/projects you use and audit them, and you notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that code hygiene and dependencies suck, does it matter if they suck because the author mis-used coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?

            You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long long before we had coding agents. I’ve used software where I later saw the code and was suprised this ever worked. Hell, I’ve found old code of myself where I wondered why this ever worked and what the fuck I’ve been smoking back then.

            It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someones work or abilities just because they use an agent, without even looking at what they (the author, ultimately) produce. And by produce I don’t mean the final binary, but their code.