• HakFoo@lemmy.sdf.org · 23 points · 1 day ago

    One huge issue is that LLMs do weird and stupid things differently from the way humans do them.

    If you’ve developed an eye for reading human-made changes, you’re not necessarily going to recognize new and surprising failure modes as easily. It’s literally harder than regular code review.

    Humans with modern tooling, for example, rarely hallucinate field/class/method/object names because non-spicy autocomplete keeps them on the rails. LLMs seem much more willing to decide the menu bar is .menuBar and not .topMenu, probably because their training corpus is full of the former.
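
    A contrived sketch of the difference, with invented names (no real framework's API is implied):

    ```typescript
    // Hypothetical UI API for illustration only.
    class AppWindow {
      // This codebase happens to call it topMenu...
      topMenu = {
        addItem(label: string): void {
          console.log(`added menu item: ${label}`);
        },
      };
    }

    const win = new AppWindow();

    // A human typing "win." gets .topMenu offered by autocomplete
    // and stays on the rails.
    win.topMenu.addItem("File");

    // An LLM, pattern-matching on its training corpus, may confidently
    // emit the more common spelling even though it exists nowhere here:
    //
    //   win.menuBar.addItem("File");
    //   // error TS2339: Property 'menuBar' does not exist on type 'AppWindow'.
    ```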

    • aivoton@sopuli.xyz · 6 points · 1 day ago

      Exactly.

      Another problem with LLMs is that they actually are useful for some tasks, and they can generate good-quality code if you're a diligent enough developer. I've built personal tools with them too, but I don't have the same knowledge of the code the LLM generated that I'd have if I'd written it myself, which means that before pushing it anywhere I basically have to familiarise myself with it the way I would in a code review.

      The knowledge you gain from this is also different from what you get by actually writing and running the code yourself. I've seen people use LLMs to write commit messages, which is the last thing you should do. Commit messages are probably the only place where we can meaningfully store the knowledge gathered during development, and the more LLM-written commits I see, the more I lose hope.
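
      A made-up example of what gets lost (every detail below is invented, but the shape of the difference is the point):

      ```
      Generated-style message: restates what the diff already shows.

          Update retry logic in http client

      Human-written message: records the why, which exists only in the
      author's head at the time of the change.

          Retry idempotent requests once on connection reset

          Our proxy drops idle keep-alive connections, so the first
          request after a quiet period often fails. A single retry on
          connection reset hides this; retrying non-idempotent requests
          would risk duplicate writes, so those still fail fast.
      ```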