• lad@programming.dev · 17 hours ago

    I have a feeling their test case is also a bit flawed. Fetching index_value instead of the index value is something I can imagine happening, and asking an LLM to ‘fix this but give no explanation’ is asking for a bad solution.

    I think they’re still correct in assuming that the output gets worse, though.

    • VoterFrog@lemmy.world · 13 hours ago

      It just emphasizes the importance of tests to me. The example should fail very obviously when you give it even the most basic test data.
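
      A minimal sketch of that point (the function and names here are invented, not taken from the article): an LLM-style mix-up that returns the index instead of the value at that index, which even trivial test data exposes immediately.

      ```python
      # Hypothetical example of an index-vs-value mix-up.
      def get_largest(values):
          """Intended to return the largest value in the list."""
          largest_index = 0
          for i, v in enumerate(values):
              if v > values[largest_index]:
                  largest_index = i
          return largest_index  # bug: should be values[largest_index]

      # Even the most basic test data makes the failure obvious:
      # get_largest([10, 30, 20]) returns 1 (the index), not 30 (the value),
      # so `assert get_largest([10, 30, 20]) == 30` fails on the first run.
      ```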

        • VoterFrog@lemmy.world · 10 hours ago

          This isn’t even a QA-level thing. If you write any tests at all, which is basic software engineering practice, even if you had AI write the tests for you, the error should be very, very obvious. I guess we could go down the road of “well, what if the engineer doesn’t read the tests?”, but at that point the article is less about insidious AI and more about bad engineers. So then just blame the bad engineers.

          • lad@programming.dev · 8 hours ago

            Yeah, I understand this case doesn’t require QA, but in the wild, companies increasingly seem to think that developers are necessary (for now), while QA surely isn’t.

            It’s not even bad engineers; as I see it, it’s just productivity being squeezed as dry as possible.