• onlinepersona@programming.dev
    link
    fedilink
    English
    arrow-up
    22
    ·
    edit-2
    4 hours ago

    Move Godot to codeberg. Github is too popular. It’s where all the fool kids go and link their LinkedIn to. It’s Microslop all the way down. It’s highly likely the number of real contributors will be higher on Codeberg than Github.

    Also, doesn’t github have a vibecoding platform built into it, making it really easy to create AI slop?

    Edit: yep they do! Get the fuck off of github!

  • frongt@lemmy.zip
    link
    fedilink
    English
    arrow-up
    27
    ·
    edit-2
    5 hours ago

    They should close PRs to the public and only accept them from contributors who apply to be vetted.

    It sucks, but that’s really the only good way to prevent spam.

      • addie@feddit.uk
        link
        fedilink
        English
        arrow-up
        4
        ·
        54 minutes ago

        AI is quite good at solving captchas; better than many humans. And it doesn’t really slow down the sloppers for them to set their machine running, come back in an hour and then solve a puzzle manually to submit it. Couple of minutes of work every day and they can still drown the world in bullshit.

        Something needs to change, but I’m not convinced that would be enough.

  • Lung@lemmy.world
    link
    fedilink
    English
    arrow-up
    17
    ·
    6 hours ago

    Seems like a natural next step is some kind of reputation system for a project’s contributors. If you’ve written 50 successfully merged PRs, you’re certainly less likely to submit trash by any method. Create a mentorship hierarchy. It sounds very helpful no matter what. Then the people who have merged 0 PRs to the project will likely work harder too, since they know they’ll be under the most scrutiny.

    • webghost0101@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      6
      ·
      3 hours ago

      It’s trivial for AI to successfully merge 50 PRs into the projects of other AI.

      It’d be frustrating for a beginner that every time they try to contribute, they’re ignored because they have yet to contribute.

  • Lvxferre [he/him]@mander.xyz
    link
    fedilink
    English
    arrow-up
    16
    ·
    6 hours ago

    When the topic of AI submissions flooding open source projects pops up, my immediate reaction is to think "see, this is why you disregard intentions". Because I genuinely believe a lot of the people submitting this slop are trying to help the project, even if in reality they’re harming it by wasting the maintainers’ time with their crap. They cause harm and deserve to be treated as a source of harm, simple as.

    And while most projects could/should use more money, I don’t think that’s the solution; it allows the devs to handle more workload, sure, but the goal should be to reduce it. I think this will be eventually done through pre-sorting contributors: a cathedral for the outsiders, but a bazaar for the insiders.

      • Lvxferre [he/him]@mander.xyz
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 hour ago

        There’s a lot in your article I agree with. A lot. I could nitpick some of the middle layers, but the conclusion is the same — we should simply disregard intentions, when judging the morality of the actions of someone (incl. ourselves).

        Especially the 7th layer — what you said there is something that has been living in my mind for a long time, but I was never able to phrase it properly.

        About the 8th layer: the bourgeoisie does love to exploit this problem when it helps them deflect blame, since it’s impossible to prove someone doesn’t have good intentions. But I don’t think they created it; I think the problem is older even than our own species, and it comes from developing a theory of mind.

        Thank you for sharing it!

  • Onno (VK6FLAB)@lemmy.radio
    link
    fedilink
    English
    arrow-up
    16
    ·
    7 hours ago

    I wonder if the influx of slop contributions can be stemmed by a legal document that makes the contributor legally liable for their submission.

    Seems like lawyers have been learning the hard way to be accountable for their slop, perhaps these lessons can be leveraged in the open source community.

    • frongt@lemmy.zip
      link
      fedilink
      English
      arrow-up
      8
      ·
      5 hours ago

      Legally liable for what? Just being bad code? How are you going to enforce that against some kid in another country?

    • subignition@fedia.io
      link
      fedilink
      arrow-up
      8
      ·
      6 hours ago

      It’s time to start putting maintainers’ attention behind a paywall. $50 refundable deposit to submit a PR, forfeited if it’s obvious AI slop

    • UnderpantsWeevil@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      7 hours ago

      Real “these kids would be very upset if they could read” situation. Who bothers to pick through the whole EULA before submitting?

      Like any open source mass contribution project that’s gained too much popularity, you need extra firebreaks in between the Open Submission and Final Product.

      That means adding manpower to submission review, QA, etc, which public projects don’t often have.

      Sort of the Achilles Heel of the open source community. AI is just making the vulnerability extra glaring.

    • inclementimmigrant@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      6
      ·
      7 hours ago

      That would be closing the gate after the horses have escaped.

      Letting unverifiable code in would do damage to developers and users that wouldn’t be easy to disentangle, and would erode trust in the product, killing it.