• CameronDev@programming.dev · 81 points · 13 hours ago

    Those kinds of challenges only work for a short while; ChatGPT has already solved the strawberry one.

    That said, I wish these AI people would just create their own projects and contribute to them. Create an LLM fork of the engine and go nuts. If your AI is actually good, you’ll end up with a better engine and become the dominant fork.

    • XLE@piefed.social · 7 points · 7 hours ago

      People who submit AI-generated code tend to crumble, or sound incomprehensible, in the face of the simplest questions. Thank goodness this works for code reviews… because if you look at AI CEO interviews, journalists can’t detect the BS.

      • sp3ctr4l@lemmy.dbzer0.com · 3 points · 3 hours ago

        LLMs are magic at everything that you don’t understand at all, and they’re horrifically incompetent at anything you do actually understand pretty well.

    • warm@kbin.earth · 44 points · 12 hours ago

      They don’t want to do it in a corner where nobody can see, they want to push it on existing projects and attempt to justify it.

        • mcv@lemmy.zip · 8 points · edited · 7 hours ago

          Use open source maintainers as free volunteers to check whether your AI coding experiment works.

    • new_guy@lemmy.world · 23 points · 12 hours ago

      There’s a joke in science circles that goes something like this:

      “Do you know what they call alternative medicine that works? Just regular medicine.”

      Good code written by an LLM should be indistinguishable from code written by a human… it would simply be “just code”.

      It’s hard to create a project the size of Godot’s without a human in the loop somewhere, filtering the slop and trying to keep the code base cohesive. At that point the maintainers would either be overwhelmed again or the code would become unmaintainable.

      And then we would go full circle and get to the same point described by the article.

      • sp3ctr4l@lemmy.dbzer0.com · 4 points · 3 hours ago

        At the risk of drawing the ire of people…

        … I run a local LLM, primarily as a coding assistant, mostly for GDScript.

        I’ve never, like, submitted anything as a potential commit to Godot proper.

        But dear lord, the amount of shenanigans I have had to figure out just to get an LLM to even understand GDScript’s syntax and methods properly is… substantial.

        They tend to just default back to using things that work in Python or JS, but that… do not work or exist in GDScript.

        One recurring quirk: they keep trying to use the C-style ? : ternary instead of GDScript’s x if condition else y construction, as sketched below.
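
        For illustration, a minimal sketch of that quirk (the variable names here are made up): GDScript only accepts the Python-style conditional expression, and the C/JS-style ternary the models default to is a syntax error.

        ```gdscript
        # Valid GDScript: Python-style conditional expression.
        var label = "high" if score > 10 else "low"

        # Invalid GDScript: C/JS-style ternary that LLMs keep producing.
        # var label = score > 10 ? "high" : "low"  # parse error
        ```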

        That, or they constantly fuck up custom sorting: they’ll either get the syntax wrong, or just hallucinate various kinds of set/array methods and properties that don’t exist in GDScript.
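
        For comparison, a minimal sketch of the pattern they fumble (Godot 4 syntax; the data is made up): Array.sort_custom() takes a single Callable comparator, not the extra methods they tend to hallucinate.

        ```gdscript
        # Godot 4: sort_custom takes one Callable that returns true
        # when `a` should be ordered before `b`.
        var items = [{"score": 3}, {"score": 1}, {"score": 2}]
        items.sort_custom(func(a, b): return a["score"] < b["score"])
        # items is now sorted ascending by "score".
        ```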

        And it’s a genuine struggle to get them to comprehend more than roughly 750 lines of code at the same time without confusing themselves.

        It is possible to use an LLM to be like, “hey, look at this code, help me refactor it to be more modular”, or, “standardize this kind of logic into a helper function”… but you basically have to browbeat them with a custom prompt that tells them to stop doing all these dumb, basic things.

        Even if you tell them in conversation, “hey, you did this wrong, here’s how it actually works”, it doesn’t matter; keep that conversation going and they will forget it and repeat the mistake. You have to keep the correction constantly present in the prompt.

        The amount of babysitting involved, constantly pointing out the errors an LLM is making, is quite substantial.

        It can make some sense in some situations, but it is extremely, extremely far away from “make a game for me in Godot”, or even “make a third-person camera script”.

        You have to break things down into much, much smaller conceptual chunks.

      • CameronDev@programming.dev · 20 points · 12 hours ago

        They can fork Godot and let their LLMs go at it. They don’t have to use the Godot human maintainers as free slop filters.

        But of course, if they did that, their LLMs would have to stand on their own merits.