The Lutris maintainer has been using AI-generated code for some time now. The maintainer also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
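Removing the co-authorship presumably means rewriting commit messages to drop their `Co-Authored-By:` trailer lines before (or after) pushing. As a rough sketch of that filtering step, the logic might look like the function below (the trailer format follows Git's common co-author convention; the function name and the idea of applying it per commit message, e.g. via a history-rewrite tool's message callback, are assumptions, not the maintainer's actual workflow):

```python
def strip_coauthor(message: str, author: str = "Claude") -> str:
    """Drop 'Co-Authored-By:' trailer lines that name the given author.

    Hypothetical sketch: in practice a tool such as git-filter-repo
    would apply a callback like this to every commit message.
    """
    kept = [
        line
        for line in message.splitlines()
        if not (
            line.lower().startswith("co-authored-by:")
            and author.lower() in line.lower()
        )
    ]
    return "\n".join(kept)
```

Note that rewriting already-published history this way changes commit hashes, which is one reason the original attribution becomes effectively unrecoverable for outside observers.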

sauce 1

sauce 2

  • ikt@aussie.zone · 11 points · 3 hours ago

    this is some real 2022 style complaint

    most developers are using ai in 2026 in some way, it’s simply too good

    • mushroommunk@lemmy.today · 7 points · 1 hour ago

      “it’s simply too good”

      Tell that to the code reviews I've been rejecting, because I strongly disagree. People are using it because they swallowed the snake oil; that doesn't mean we can't keep fighting against it.

    • pixxelkick@lemmy.world · 12 points · 2 hours ago

      People are malding, but it's the truth.

      You are living under a rock if you think any major software now doesn't have AI-written pieces in it in some manner.

      It's so common now that it's a waste of time to label it; you should just assume AI was involved at this point.

      • commander@lemmy.world · 2 points · 2 hours ago

        Where I work, the company has a ChatGPT contract that's used as a coding-assistant tool in VS Code, and I imagine also by the admin/contract/legal people doing what they do. For every contracting-company developer I've worked with, their company has some enterprise ChatGPT/Claude/Gemini/etc. I've talked to software developers at large companies who raved about what they could do with enterprise Claude and enterprise Cisco AI coding tools.

        Pretty much everyone I know at the minimum uses the Gemini Google search summary for coding questions/dockerfile/kubernetes/open shift/docker compose/helm/terraform/ansible/bash script/python script/snippets/…

        • Lodespawn@aussie.zone · 1 point · edited · 16 minutes ago

          It seems like the only people who actually derive value from it are software developers or middle managers. Every other professional discipline has liability and a need to verify accuracy before acting on something. So beyond reading the AI-generated summary on a search engine for non-critical things, it's basically useless.

    • mrmaplebar@fedia.io · 4 points · 2 hours ago

      I have multiple years of experience maintaining and reviewing code for a medium-sized open source project, and in my experience we have not seen any meaningful increase in good contributions since the AI investment bubble kicked off a couple of years ago.

      On the flip side, I know that dealing with a glut of low-quality AI-generated slop merge requests has been a real problem for other large open source projects. https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/

      In my personal view, AI is really not suitable for actual programming, just typing. Programming requires thought and logic, something LLMs do not actually possess. Furthermore, without an authentic understanding of the code being generated, the human beings who are ultimately responsible for maintaining the code, fixing errors, and making improvements will only be hurting themselves in the long run when they can't follow the "logic" of what was written. You're just creating more problems for yourself in the future.

      Personification of probability doesn’t do us any good, open source projects require thoughtful contributions from thinking entities.

      To make matters worse, I think that AI is also not at all suitable for "open source" development, as it obfuscates authorship and completely undermines the concept of FOSS licensing.

      Were AI models trained on FOSS code, including GPL-licensed code? Does this make the output of AI models GPL too, or are LLMs magical machines that can launder GPL code into something proprietary? How do you know that the code produced by your LLM is legally safe and not ripped verbatim from someone else's scraped proprietary codebase? Finally, who is the author and copyright holder of AI-generated code?

      Ultimately, right now in 2026, we are seeing a lot of generative AI use being forced by the corporate world, but we are not seeing it result in any meaningful improvement to worker productivity or product quality. (Windows 11 has never been in worse shape than it is today, and I can only assume that is because much less human intelligence is behind its programming.)