• kibiz0r@midwest.social
    1 day ago

    I’m afraid it’s worse than just moving the bottleneck to review/due-diligence.

C-suites don’t like to admit this, because it challenges the assumption that job roles are all about input and output with no messy internal state to worry about, but…

    When you write some code, the code isn’t the only thing that you make. You also develop a mental model of how the code works, how you wish it would work, the distance between those two, and some sense of the possible paths that could get you there.

    That mental model is ultimately the more important part for the long-term health of the project. Coding is more an activity of communication between people; having an artifact that tells the computer what to do is almost an incidental side-effect of successful communication.

    • moto@programming.devOP
      1 day ago

      Yeah, that’s a real good point. I focused a lot on the short-term issues of agentic slop, but you’re right, the long-term impact of this is going to be staggering.

      That mental model is ultimately the more important part for the long-term health of the project. Coding is more an activity of communication between people; having an artifact that tells the computer what to do is almost an incidental side-effect of successful communication.

      100%. Something I wanted to touch on in my post but cut because I couldn’t weave it in well was more on the relationship with “The Problem” and “The Code”. Pre-AI coding acted as a forcing function for the problem context. It’s really hard to effectively build software without understanding what you’re ultimately driving towards. With agentic coding we’re stripping that forcing function out.

      Institutional knowledge is already something that C-suites find hard to quantify and value, and now you’re ripping out the crucial mechanism for building it.

      I see a lot of memes with people being like “in 5 years we’re just gonna press yes and not understand what the agent is doing” and I keep thinking “why do people think this is funny?”

      • kibiz0r@midwest.social
        1 day ago

        I understand why you cut it. There is a lot that you could dig into there, and you would really have to go into detail if you wanna convince the suits who can only think in terms of KPIs.

        The forcing function is a good way of looking at it. We’re not really doing the same thing but faster. We’re doing less, and that naturally takes less time. AI made understanding optional.

        I kinda wonder to what extent coding is actually a domain that AI is uniquely “good at” vs. simply a domain whose activities are able to survive on artifacts which are produced but not understood.

        It seems like something we were already exceptionally capable of tolerating, compared to other sectors.