• thingsiplay@beehaw.org · 2 days ago

    I like writing code myself; it’s a process I enjoy. If the LLM writes it for me, then I only get to do the worst part of the job: debugging. Also, for many people, letting the AI write the code means less understanding; otherwise you could have written it yourself. Still, there are cases where the AI is helpful, especially for writing tests in a restrictive language such as Rust. People forget that writing the code is only one part of the job; the other is to depend on it, debug it, and build other stuff on top.

    • Ephera@lemmy.ml · 2 days ago

      Still, there are cases where the AI is helpful, especially for writing tests in a restrictive language such as Rust.

      For generating the boilerplate surrounding it, sure.
      But the contents of the tests are your specification. They’re the one part of the code where you should be thinking about what needs to happen, and they should be readable.

      A colleague at work generated unit tests, and it’s the stupidest code I’ve seen in a long while: all the imports repeated in each test case, plus tons of random assertions also repeated in each test case, like some shotgun approach to regression testing.
      It makes it impossible to know which parts of the asserted behaviour are actually intended and which parts just got caught in the crossfire.
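      For contrast, a minimal sketch of what tests-as-a-readable-specification could look like in Rust. Everything in it is hypothetical (the parse_config function, the retries field); the point is one intended behaviour per test, with the shared imports declared once:

      ```rust
      // Hypothetical example, not the colleague's code: one intended behaviour
      // per test, asserted directly.
      struct Config {
          retries: u32,
      }

      fn parse_config(input: Option<&str>) -> Result<Config, String> {
          match input {
              // No input given: fall back to defaults.
              None => Ok(Config { retries: 3 }),
              // Toy parser that only understands "retries = N".
              Some(s) => {
                  let retries = s
                      .trim_start_matches("retries = ")
                      .parse()
                      .map_err(|e| format!("bad retries value: {e}"))?;
                  Ok(Config { retries })
              }
          }
      }

      #[cfg(test)]
      mod tests {
          use super::*; // imports live here once, not copied into every test

          #[test]
          fn missing_input_falls_back_to_default_retries() {
              let config = parse_config(None).expect("defaults should always parse");
              assert_eq!(config.retries, 3);
          }

          #[test]
          fn explicit_retries_value_overrides_the_default() {
              let config = parse_config(Some("retries = 7")).unwrap();
              assert_eq!(config.retries, 7);
          }
      }
      ```

      Each test name reads like a line of the spec, so when one fails you know which intended behaviour broke.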

      • kibiz0r@midwest.social · 2 days ago

        I think maybe the biggest conceptual mistake in computer science was calling them “tests”.

        That word has all sorts of incorrect connotations:

        • That they should be made after the implementation
        • That they’re only useful if you’re unsure of the implementation
        • That they should be looking for deviations from intention, instead of giving you a richer palette with which to paint your intention

        You get this notion of running off to apply a ruler and a level to some structure that’s already built, adding notes to a clipboard about what’s wrong with it.

        You should think of it as a pencil and paper — a place where you can be abstract, not worry about the nitty-gritty details (unless you want to), and focus on what would be right about an implementation that adheres to this design.

        Like “I don’t care how it does it, but if you unmount and remount this component it should show the previous state without waiting for an HTTP request”.

        Very different mindset from “Okay, I implemented this caching system, now I’m gonna write tests to see if there are any off-by-one errors when retrieving indexed data”.
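        To make that concrete (a made-up sketch of my own, in Rust rather than a frontend framework, with invented names like Store and a fetch counter standing in for the HTTP request):

        ```rust
        // Hypothetical intent-level test: recreating the view over a store
        // must not trigger another "fetch". All names here are made up.
        use std::cell::Cell;

        struct Store {
            fetches: Cell<u32>,
            cached: Cell<Option<&'static str>>,
        }

        impl Store {
            fn new() -> Self {
                Store { fetches: Cell::new(0), cached: Cell::new(None) }
            }

            // Stands in for the HTTP request: only runs when nothing is cached.
            fn get(&self) -> &'static str {
                if self.cached.get().is_none() {
                    self.fetches.set(self.fetches.get() + 1);
                    self.cached.set(Some("previous state"));
                }
                self.cached.get().unwrap()
            }
        }

        #[cfg(test)]
        mod tests {
            use super::*;

            #[test]
            fn remount_shows_previous_state_without_a_new_fetch() {
                let store = Store::new();
                let first = store.get();  // "mount": the one real fetch
                let second = store.get(); // "unmount + remount": must reuse it
                assert_eq!(first, second);
                assert_eq!(store.fetches.get(), 1, "remount must not fetch again");
            }
        }
        ```

        The test doesn’t know or care whether the cache is a HashMap, an index, or anything else; it only pins down the behaviour the design promises.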

        I think that, very often, writing tests after the impl is worse than not writing tests at all. Cuz unless you’re some sort of wizard, you probably didn’t write the impl with enough flexibility for your tests to be flexible too. So you end up with brittle tests that break for bad reasons and reproduce all of the same assumptions that the impl has.

        You spent extra time on the task, and the result is that when you have to come back and change the impl, you’ll have to spend extra time changing the tests too, instead of the tests helping you write the code faster in the first place and limiting them to only what you actually care about keeping the same long-term.

      • thingsiplay@beehaw.org · 2 days ago

        It’s actually the first time I’ve tried AI-assisted unit test creation. It took multiple iterations, and sometimes it just didn’t work well. The most important part is, as you say, to think through and read every single test case, and edit or replace it if necessary. Some tests are really stupid, especially ones checking things that are already encoded in Rust’s type system. I mean, you still need a head for revision and to know what you want to do.

        I still wonder if I should have just given it the function signature without the inner workings of the function. That’s an approach I want to explore next time. I really enjoyed working with it for the tests, because writing tests is very time-consuming. Although I’m not much of a test guy, so maybe the results aren’t that good anyway.
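        For example (purely hypothetical, assuming the serde_json crate; the function and its behaviour are invented here), the idea would be to hand over only a signature and doc comment like the one below, so the generated tests have to target the contract rather than the implementation:

        ```rust
        // Hypothetical sketch: in the "signature only" approach, just the
        // signature and doc comment would go to the model. A body is included
        // here only so the example compiles.
        use serde_json::Value;

        /// Looks up a dotted path like "user.name" in a JSON document.
        /// Returns None if any segment of the path is missing.
        fn lookup_path(root: &Value, path: &str) -> Option<Value> {
            path.split('.')
                .try_fold(root, |node, key| node.get(key))
                .cloned()
        }

        #[cfg(test)]
        mod tests {
            use super::*;
            use serde_json::json;

            // The kind of generated test you'd still review by hand: it relies
            // only on what the doc comment promises, not on how lookup_path is
            // written internally.
            #[test]
            fn missing_segment_returns_none() {
                let doc = json!({ "user": { "name": "thingsiplay" } });
                assert_eq!(lookup_path(&doc, "user.email"), None);
                assert_eq!(lookup_path(&doc, "user.name"), Some(json!("thingsiplay")));
            }
        }
        ```

        That way the tests can’t just restate whatever the implementation happens to do.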

        Edit: Across about 250 unit tests (which sadly still don’t cover all functions) for a CLI JSON-based tool, several bugs were found thanks to this approach. I wouldn’t have written them all manually.