• brucethemoose@lemmy.world · 4 points · 39 minutes ago

    Godot is also weighing the possibility of moving the project to another platform where there might be less incentive for users to “farm” legitimacy as a software developer with AI-generated code contributions.

    Aahhh, I see the issue now.

    That’s the incentive to just skirt the rules of whatever their submission policy is.

  • Luden@lemmings.world · 6 points · 1 hour ago

    I'm a game developer and a web developer, and I sometimes use AI just to write template code for me so I can get the boilerplate done faster. For the rest of the code, AI is soooo dumb it's basically impossible to make something that works!

    • Pyr@lemmy.ca · 5 points · 43 minutes ago

      Yes, I feel like many people misunderstand AI capabilities.

      They think it somehow comes up with the best solution, when really it's more like lightning: it takes the path of least resistance. It finds whatever works the fastest, if it can do that at all without making something up and then lying that it works.

      It by no means creates elegant and efficient solutions to anything.

      AI is just a tool. You still need to know what you are doing to tell whether its solution is worth anything, and then you'll still need to be able to adjust and tweak it.

      It's most useful for giving you an idea of how to do something, by suggesting a method or solution you may not have known about or wouldn't have considered. It's also useful for testing your own stuff or making slight adjustments.

      • AnUnusualRelic@lemmy.world · 1 point · edited 15 minutes ago

        It finds whatever works the fastest

        For a very lax definition of “works”…

        I kind of agree with the rest of your points. Remember, though, that the suggestions it gives you for things you're not familiar with may very well be terrible ones that are frowned upon. So it's always best to triple-check what it outputs and only use it for broad suggestions.

  • derAbsender@piefed.social · 8 points · 2 hours ago

    Stupid question:

    Are there really no safeguards in the merging process except for human oversight?

    Isn't there some “in review” state where people who want to see the experimental stuff can pull it, and if enough™ people say “this new shit is okay” it gets merged?

    That way the main project doesn't get poisoned, everyone can still contribute in a way, and those who want to experiment can test the new stuff.

    • Kissaki@feddit.org · 9 points · edited 2 hours ago

      Most projects don't have enough people or external interest for that kind of process.

      It would be possible to build tooling like that, but standard forges don't provide it, so it would feel cumbersome.

      And in the end you're back to needing contributors, trustworthiness, and quality control, because testing and reviewing are contributions too. You don't want just a popularity contest (“I want this”), nor to blindly trust unknown contributors.

    • Little8Lost@lemmy.world · 1 point · 1 hour ago

      It would be nice to bump up the useful stuff through the community, but even then there could be bot accounts that push the crap to the top.

  • ZeroOne@lemmy.world · 37 points · edited 25 minutes ago

    So I guess it's time to switch to a different style of FOSS development?

    The cathedral style, as used by Fossil: to contribute, you have to be manually admitted into the group. It's a high-trust environment where devs know each other on a first-name basis.

    Oh BTW, Fossil is a fully-fledged alternative to Git & GitHub. It has:

    • Version-Tracking
    • Webserver
    • Bug-tracker
    • Ticketing-system
    • Wiki
    • Forum
    • Chat
    • And a Graphical User-Interface which you can theme

    All in One binary

    • ThirdConsul@lemmy.zip · 7 points · 60 minutes ago

      What if I want to contribute to a FOSS project because I'm using it, but I don't want to make new friends?

    • RemADeus@thelemmy.club · 8 points · 2 hours ago

      That is a wonderful method because it works similarly to the way many Fediverse server administrators manually admit people when they request new accounts. That way the slop is immediately filtered away.

      • ZeroOne@lemmy.world · 1 point · edited 23 minutes ago

        Why would your code be embarrassing? Yes, I get it, but so what? At least it's not AI slop, and you can fork it and do your own thing.

        It's not a perfect solution.

      • nightlily@leminal.space · 4 points · 2 hours ago

        It's discussed in the Bluesky thread, but the CI costs are too high on GitLab and Codeberg for Godot's workflow.

      • e8d79@discuss.tchncs.de · 18 points · 3 hours ago

        Codeberg is cool, but I would prefer not having all FOSS projects centralised on another platform. In my opinion, projects the size of Godot should consider running their own infrastructure.

        • JackbyDev@programming.dev · 8 points · 2 hours ago

          Let’s be realistic. Not everyone is going to move to Codeberg. Godot moving to Codeberg would be decentralizing.

  • lmr0x61@lemmy.ml · 129 points · 11 hours ago

    Damn, Godot too? I know curl had to discontinue their bug bounties over the sheer tidal volume of AI slop reports… Open source wasn’t ever perfect, but whatever cracks in there were are being blown a mile wide by these goddamn slop factories.

    • luciferofastora@feddit.org · 5 points · 1 hour ago

      Open source wasn’t ever perfect, but whatever cracks in there were are being blown a mile wide by these goddamn slop factories.

      This is the perpetual issue, not just with AI: Any system will have flaws and weaknesses, but often, they can generally be papered over with some good will and patience…

      Until selfish, immoral assholes come and ruin it for everyone.

      From teenagers smoking on the playground and burying their cigs in the sand, so that parents with small children can't use it any more, to companies exploiting legal loopholes, to AI slop drowning volunteers in obnoxious bullshit: most individual people might be decent, but a single turd is all it takes to ruin the punch bowl.

    • ZILtoid1991@lemmy.world · 9 points · 2 hours ago

      Then get ready for people making slop libraries not because they're dissatisfied with existing solutions (as I was when I made iota, a direct media layer similar to SDL, but with better access to some low-level functionality, an OOP-ish design, and a memory-safe language), but just because they can.

      I got a link to a popular rect-packing algorithm pretty quickly after asking in a Discord server. Nowadays I'd be told to “vibecode it”.

      • Jankatarch@lemmy.world · 3 points · 1 hour ago

        Can confirm the last part. I'm at uni, and if anyone ever asks a question in the class group chats, the first 5-6 answers will be “ask ChatGPT.”

    • fuck_u_spez_in_particular@lemmy.world · 12 points · 3 hours ago

      Unfortunately it's a general theme in open source. I've lost almost all motivation for programming in my free time because of all these AI-slop PRs. It's kind of sad how this craft (among others) is flooded with slop…

  • zr0@lemmy.dbzer0.com · 13 points · 8 hours ago

    What people don't realize is that AI does not write good code unless you tell it to. I've been experimenting a lot with letting AI do the writing while I give it specific prompts, but even then it very often changes code that it had no need to touch. And that is the dangerous part.

    I believe the only thing repo owners could do is use AI against AI: let the blind AI contributors drown in work by constantly telling them to improve the code and by asking critical questions.

        • mcv@lemmy.zip · 2 points · 7 hours ago

          It sounds crazy, but it can have an impact. It might follow some coding standards it wouldn't otherwise.

          But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.

          All code needs to be verified by a human. If you can tell it’s AI, it should be rejected. Unless it’s a vibe coding project I suppose. They have no standards.

          • uniquethrowagay@feddit.org · 7 points · 6 hours ago

            But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.

            That's the problem with LLMs in general, isn't it? It may give you the perfect answer. It may also give you a perfect-sounding answer that is terribly incorrect. Often, the only way to notice is if you knew the answer in the first place.

            Maybe they can be used to get a first draft of an email you don't know how to start, or to write a “funny” poem for the retirement party of Christine from Accounting that makes her cringe to death on the spot. Yet people treat them like this hyper-competent, all-knowing assistant. It's maddening.

            • mcv@lemmy.zip · 2 points · 5 hours ago

              Exactly. They’re trained to produce plausible answers, not correct ones. Sometimes they also happen to be correct, which is great, but you can never trust them.

      • zr0@lemmy.dbzer0.com · 1 point · 7 hours ago

        Obviously you have no clue how LLMs work; it's way more complex than just telling one to write good code. What I was saying is that even with a very good prompt, it will make things up and you have to double-check it. However, for that you need to be able to read and understand code, which is not the case for 98% of the vibe coders.

        • anon_8675309@lemmy.world · 2 points · 1 hour ago

          So just don't use LLMs then. The very issue is that mediocre devs just accept whatever and try to PR it.

          Don't be a mediocre dev.

          • zr0@lemmy.dbzer0.com · 1 point · 42 minutes ago

            Of course. It makes it easy to appear as if you've actually done something smart, but in reality it just creates more work for others. I believe senior devs and engineers know how and when to use an LLM. But if you're a crypto bro trying to develop an ecosystem from scratch, it will be a huge mess.

            It's obvious that we will not be able to stop those PRs, so we need to come up with other means: automation that helps maintainers save time. I've only seen very few repos using automated LLM actions, and I think the main reason is the cost of running them.

            So how would you fight the wave of useless PRs?

        • Chais@sh.itjust.works · 9 points · 4 hours ago

          So what you're saying is, in order for “AI” to write good code, I need to double-check everything it spits out and correct it. But sure, tell yourself that it saves any amount of time.

        • porous_grey_matter@lemmy.ml · 13 points · edited 6 hours ago

          So what you're saying directly contradicts your previous comment: in fact, it doesn't produce good code even when you tell it to.

    • vane@lemmy.world · 16 points · edited 7 hours ago

      You’re absolutely right. I haven’t realized that I can just tell it to write good code. Thank you, it changed my life.

  • tabular@lemmy.world · 171 points · edited 12 hours ago

    Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

    Do they think the AI-written code Just Works™? Do they feel so detached from that code that they don't feel embarrassment when it's shit? It's like calling yourself a fiction writer and putting “written by (your name)” on the cover when you didn't write it, and it's nonsense.

    • JustEnoughDucks@feddit.nl · 7 points · 6 hours ago

      I would think they'll have to combat AI code with an AI-code-recognizer tool that auto-flags a PR or issue as AI, so they can quickly run through and close them. If the contributor doesn't come back, explain the code, and show test results proving it works, the PR gets auto-closed after a week or so.

    • atomicbocks@sh.itjust.works · 62 points · 10 hours ago

      From what I have seen Anthropic, OpenAI, etc. seem to be running bots that are going around and submitting updates to open source repos with little to no human input.

      • Notso@feddit.org · 36 points · 6 hours ago

        You guys, it's almost as if AI companies are trying to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took a page from Steve Bannon's playbook: flood the zone with slop.

    • kadu@scribe.disroot.org · 127 points · 12 hours ago

      I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

      AI bros have zero self-awareness and shame, which is why I keep saying the best tool for fighting back is making it socially shameful.

      Somebody comes along saying “Oh look at this image I just genera…” and you cut them off with “Looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that, hahah. So anyway, what were you saying?”

    • Feyd@programming.dev · 88 points · 12 hours ago

      LLM code generation is the ultimate Dunning-Kruger enhancer. They think they're 10x ninja wizards because they can generate unmaintainable demos.

        • NotMyOldRedditName@lemmy.world · 17 points · 9 hours ago

          Sigh. Now in CSI, when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn't exist, or it'll use the face of someone in the training set and they'll go after the wrong person.

          Either way, I have a feeling there'll be some ENHANCE-failure episode due to AI.

  • xkbx@startrek.website · 37 points · 13 hours ago

    Couldn't you just set up actual AI/LLM verification questions, like “how many r's are in strawberry?”

    Or even just have separate AI and manual contribution queues. It wouldn't stop everything 100%, but it might make the clean-up process easier.

    • SkunkWorkz@lemmy.world · 3 points · 3 hours ago

      Yeah, but that won't stop people from manually submitting PRs made with AI. A lot of the slop isn't just automated pull requests but people using AI to find and fix “bugs” without understanding the code at all.

    • CameronDev@programming.dev · 81 points · 13 hours ago

      Those kinds of challenges only work for a short while. ChatGPT has already solved the strawberry one.

      That said, I wish these AI people would just create their own projects and contribute to those. Create an LLM fork of the engine and go nuts. If your AI is actually good, you'll end up with a better engine and become the dominant fork.

      • XLE@piefed.social · 7 points · 7 hours ago

        People who submit AI-generated code tend to crumble, or sound incomprehensible, in the face of the simplest questions. Thank goodness this works for code reviews… because if you look at AI CEO interviews, journalists can’t detect the BS.

        • sp3ctr4l@lemmy.dbzer0.com · 3 points · 3 hours ago

          LLMs are magic at everything that you don’t understand at all, and they’re horrifically incompetent at anything you do actually understand pretty well.

      • warm@kbin.earth · 44 points · 12 hours ago

        They don’t want to do it in a corner where nobody can see, they want to push it on existing projects and attempt to justify it.

          • mcv@lemmy.zip · 8 points · edited 7 hours ago

            Use open source maintainers as free volunteers to check whether your AI coding experiment works.

      • new_guy@lemmy.world · 23 points · 12 hours ago

        There’s a joke in science circles that goes something like this:

        “Do you know what they call alternative medicine that works? Just regular medicine.”

        Good code made by an LLM should be indistinguishable from code made by a human… it would simply be “just code”.

        It's hard to create a project the size of Godot's without a human in the loop somewhere filtering the slop and trying to keep a cohesive code base. At that point, either the humans would be overwhelmed again or the code would become unmaintainable.

        And then we would go full circle and get to the same point described by the article.

        • sp3ctr4l@lemmy.dbzer0.com · 4 points · 3 hours ago

          At the risk of drawing the ire of people…

          … I run a local LLM primarily as a coding assistant, mostly for GDScript.

          I’ve never like, submitted anything as a potential commit to Godot proper.

          But dear lord, the amount of shenanigans I've had to figure out just to get an LLM to even understand GDScript's syntax and methods properly is… substantial.

          They tend to just default back to using things that work in Python or JS, but… do not work or exist in GDScript.

          Like, one recurring quirk is that they keep trying to use the C-style ? : ternary instead of GDScript's “x if condition else y” conditional expression.

          Or they constantly fuck up custom sorting: they'll either get the syntax wrong, or just hallucinate various set/array methods and properties that don't exist in GDScript.
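
          For illustration, here's the kind of correction I keep having to make. A rough sketch (the variables are made up), but the syntax is standard Godot 4 GDScript:

          ```gdscript
          # C-style ternary is a syntax error in GDScript:
          # var label = score > 10 ? "high" : "low"    # does NOT parse
          var label = "high" if score > 10 else "low"  # conditional expression

          # There is no sort(key=...) like Python, and no sort(comparator)
          # like JS. Arrays use sort_custom() with a Callable that returns
          # true when the first argument should come before the second:
          var items = [{"name": "b", "prio": 2}, {"name": "a", "prio": 1}]
          items.sort_custom(func(a, b): return a["prio"] < b["prio"])
          ```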

          And it's a genuine struggle to get them to comprehend more than roughly 750 lines of code at a time without confusing themselves.

          It is possible to use an LLM for things like “hey, look at this code, help me refactor it to be more modular” or “standardize this kind of logic into a helper function”, but you basically have to browbeat them with a custom prompt that tells them to stop doing all these dumb, basic things.

          Even if you tell them in conversation “hey, you did this wrong, here's how it actually works”, it doesn't matter; keep that conversation going and they will forget it and repeat the mistake. You have to keep the correction constantly present in the prompt.

          The amount of babysitting involved, constantly pointing out the errors the LLM is making, is quite substantial.

          It can make some sense in some situations, but it is extremely, extremely far away from ‘make a game for me in Godot’, or even ‘make a third-person camera script’.

          You have to break things down into much, much more conceptually smaller chunks.

        • CameronDev@programming.dev · 20 points · 12 hours ago

          They can fork Godot and let their LLMs go at it. They don’t have to use the Godot human maintainers as free slop filters.

          But of course, if they did that, their LLMs would have to stand on their own merits.

    • turboSnail@piefed.europe.pub · 5 points · 9 hours ago

      How about asking it to write a short political speech on climate change, then counting the number of rhetorical devices and em-dashes? A human dev wouldn't be bothered to write anything fancy or impactful when they just want to submit a bug fix; it would be simple, poorly written, and full of typos. LLMs try to make it way too impressive and impactful.

      • sp3ctr4l@lemmy.dbzer0.com · 1 point · edited 2 hours ago

        The funnier thing is when you try to get an LLM to do like, a report on its creators.

        You can keep feeding them articles detailing the BS their company is up to, and it will usually just keep reverting to the company line, despite a preponderance of evidence that said company line is horseshit.

        Like, try to get an LLM to give you an exact number for how much this very conversation you're having with it will increase RAM prices over a three-month period.

        What does it think about the roughly 95% of companies implementing ‘AI’ into their business processes reporting a zero-to-negative boost to productivity?

        What are the net economic damages of this malinvestment?

        Give it a bunch of economic data, reports, etc.

        Results are usually what I would describe as ‘comical’.

      • Pamasich@kbin.earth · 2 points · 3 hours ago

        I mean, ChatGPT can do it; I just tested it. And if you run your own AI, you can probably remove most such rules anyway.