• zr0@lemmy.dbzer0.com · 13 points · 8 hours ago

    What people don’t realize is that AI does not write good code unless you tell it to. I’ve been experimenting a lot with letting AI do the writing while I give it specific prompts, but even then it very often changes code it had no reason to touch. And that is the dangerous part.
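
    The unrelated-change problem described above can at least partly be caught mechanically, before any human review. A minimal sketch, using only the standard library — the helper name, file paths, and scope patterns are hypothetical, not from any real tool:

```python
from fnmatch import fnmatch

def out_of_scope(changed_files, allowed_globs):
    # Return files a PR touches that fall outside its stated scope.
    # `allowed_globs` is the (hypothetical) list of file patterns the
    # PR description claims to modify.
    return [f for f in changed_files
            if not any(fnmatch(f, g) for g in allowed_globs)]

# Example: a "fix typo in docs" PR that also rewrites core logic.
changed = ["docs/intro.md", "src/core/engine.py"]
print(out_of_scope(changed, ["docs/*"]))  # ['src/core/engine.py']
```

    Anything the check flags is exactly the "code that was totally unnecessary" to change, and a maintainer can ask about it before reading the rest of the diff.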

    I believe the only thing repo owners could do is use AI against AI. Let the blind AI contributors drown in work by constantly telling them to improve the code, and by asking critical questions.

        • mcv@lemmy.zip · 2 points · 7 hours ago

          It sounds crazy, but it can have an impact. It might follow coding standards it otherwise wouldn’t.

          But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.

          All code needs to be verified by a human. If you can tell it’s AI, it should be rejected. Unless it’s a vibe coding project I suppose. They have no standards.

          • uniquethrowagay@feddit.org · 7 points · 6 hours ago

            > But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.

            That’s the problem with LLMs in general, isn’t it? It may give you the perfect answer. It may also give you the perfect sounding answer while being terribly incorrect. Often, the only way to notice is if you knew the answer in the first place.

            Maybe they can be used to get a first draft of an email you don’t know how to start, or to write a “funny” poem for the retirement party of Christine from Accounting that makes everyone cringe to death on the spot. Yet people treat them like some hyper-competent, all-knowing assistant. It’s maddening.

            • mcv@lemmy.zip · 2 points · 5 hours ago

              Exactly. They’re trained to produce plausible answers, not correct ones. Sometimes they also happen to be correct, which is great, but you can never trust them.

      • zr0@lemmy.dbzer0.com · 1 point · 7 hours ago

        Obviously you have no clue how LLMs work; it is way more complex than just telling one to write good code. What I was saying is that even with a very good prompt, it will make things up and you have to double-check it. However, for that you need to be able to read and understand code, which is not the case for 98% of the vibe coders.

          • anon_8675309@lemmy.world · 2 points · 1 hour ago

          So just don’t use LLMs then. The very issue is that mediocre devs accept whatever comes out and try to PR it.

          Don’t be a mediocre dev.

            • zr0@lemmy.dbzer0.com · 1 point · 43 minutes ago

            Of course. It makes it easy to appear as if you have actually done something smart, while in reality it just creates more work for others. I believe that senior devs and engineers know how and when to use an LLM. But if you are a crypto bro trying to develop an ecosystem from scratch, it will be a huge mess.

            It is obvious that we will not be able to stop those PRs, so we need to come up with other means: automation that helps maintainers save time. I have only seen very few repos using automated LLM actions, and I think the main reason for that is the cost of running them.

            So how would you fight the wave of useless PRs?
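
            One cheap automatism that costs nothing to run, unlike an LLM action, is a pre-review triage score built from signals the forge already exposes. A minimal sketch — every field name and weight below is a hypothetical example, not any real forge API:

```python
def triage_score(pr):
    # Crude pre-review score: higher = more worth human attention.
    # All fields are hypothetical example signals.
    score = 0
    if pr.get("adds_tests"):
        score += 2   # contributor bothered to test
    if pr.get("linked_issue"):
        score += 2   # addresses an agreed-upon problem
    if len(pr.get("description", "")) >= 100:
        score += 1   # non-trivial explanation of the change
    if pr.get("files_changed", 0) > 20:
        score -= 2   # huge drive-by diffs are suspect
    return score

prs = [
    {"description": "fix stuff", "files_changed": 42},
    {"description": "Fixes #123: " + "details " * 20,
     "adds_tests": True, "linked_issue": True, "files_changed": 3},
]
ranked = sorted(prs, key=triage_score, reverse=True)
print([triage_score(pr) for pr in ranked])  # [5, -2]
```

            Low scorers can sit in a queue behind a template reply asking for tests and a linked issue, so the drive-by AI PRs cost their authors effort instead of the maintainer’s.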

        • Chais@sh.itjust.works · 9 points · 4 hours ago

          So what you’re saying is that in order for “AI” to write good code, I need to double-check everything it spits out and correct it. But sure, tell yourself that it saves any amount of time.

        • porous_grey_matter@lemmy.ml · 13 points · edited · 6 hours ago

          So what you’re saying directly contradicts your previous comment: in fact, it doesn’t produce good code even when you tell it to.

    • vane@lemmy.world · 16 points · edited · 7 hours ago

      You’re absolutely right. I hadn’t realized that I could just tell it to write good code. Thank you, it changed my life.