I also think there still hasn’t been a study showing a consistent, long-term, significant(!) productivity gain for coders. (Other than total lines of code, but that alone is a poor measure.) The amount of new hidden bugs and other issues seems to outweigh most of the perceived gains.
The key question is whether total costs along the pipeline, from requirements definition down to the final quality-controlled, fully debugged product, can be reduced at real LLM costs (not the currently vastly subsidised costs).
> I also think there still hasn’t been a study showing a consistent, long-term, significant(!) productivity gain for coders. (Other than total lines of code, but that alone is a poor measure.) The amount of new hidden bugs and other issues seems to outweigh most of the perceived gains.
I would argue that adding lines of code is the worst thing a developer can do.
> The key question is whether total costs along the pipeline, from requirements definition down to the final quality-controlled, fully debugged product, can be reduced at real LLM costs (not the currently vastly subsidised costs).
I agree. And from the data I’ve seen so far, it doesn’t look convincing at all.
In part because AI seems to be phenomenally unintelligent.