how tf am i supposed to get any work done now?

  • applebusch@lemmy.blahaj.zone
    3 hours ago

    In all seriousness, using AI for codegen is at best shortsighted negligence. You know that problem huge long-running software projects have where it becomes a nightmare to change anything? That’s some proportion of poor architectural design, lack of cleanup or refactor time, and poor understanding of the code by developers. Poor architectural design can be repaired by cleanup and refactoring, so both of those issues end up being management/planning failures more than anything.

    Not understanding the codebase is much more complex. It can be caused by attrition causing loss of institutional knowledge, the codebase growing faster than anyone can keep track of, the team being so large no one can stay on top of things, too much time passing since anyone has looked at or changed parts of it, lots of reasons. The only solution is a long audit with the associated cleanup and refactoring. If you don’t do it, it just takes forever to change anything because of all the knock-on effects no one can predict, meaning delays and bugs.

    When you use AI tools the codebase grows very quickly, too quickly to really comprehend, and you get shitty architecture to go along with it. You’re just speedrunning enterprise software, or spending all your time reviewing slop code. It’s like a drug: the first time it does something fast and well you feel it’s so great, but it will never live up to that, because it secretly sucks and can only ever suck. Best case it slows you down and you get good software at the end. Worst case you spend all your time wrestling with it and never get a finished product.

    • MangoCats@feddit.it
      35 minutes ago

      You know what AI agents can help accomplish faster, with fewer human resources, than previous tools?

      • cleanup: Review this code for technical debt, report. Plan and implement fixes to address (selected portions of reported) tech debt.

      • refactor: Review this code for DRY and SSOT opportunities. Plan and implement…

      • Architectural Design - yeah, I’m not on a good footing with how to leverage the current tools for good architectural design. They are good, however, at tech stack selection: comparing various options, including architectural options. They’re not always great at following architectural designs once the system gets too complex to keep the whole architecture in context while designing. Much like human-designed systems, they work better if you can modularize and keep each module a manageable size, building tree-style to form the larger system.

      • poor understanding of the code by developers. Yeah, any code not written by me is hard to understand, and any code written by me is hard for others to understand. “Me” being the vast majority of developers I have ever worked with. At least agents will comment their code and write somewhat comprehensive documentation when you ask them to.

      • management/planning failures more than anything. - the strongest tool I have found for AI development is to have the agents make plans. Review those plans, or not, but have them make a plan, then implement the plan, then review the implementation against the plan and point out discrepancies/shortcomings. The worst behavior AI agents had (a few months ago; they’re getting better) was to do some fraction of what you told them to, then effectively say “ALL DONE, BOSS! What’s next?” What’s next is to go back to the written plan and make sure it’s complete. I think, again, they lose sight of the plan as their context window overflows, so you have to keep reminding them to re-read it. Management.

      • the team being so large no one can stay on top of things, this is very familiar turf when dealing with limited context windows in AI agents.

      • too much time passing since anyone has looked at or changed parts, this is something AI agents don’t suffer from - they have “the eternal sunshine of the spotless mind”: you are introducing them to the project fresh with every new context window. Hopefully you are simultaneously developing a tree-form documentation set with which they can easily navigate to the parts of the project they need to focus on and get “up to speed” for the new tasks at hand (which should include: maintenance of the documentation).

      • When you use AI tools the code base grows very quickly, only if you let it.

      • too quick to really comprehend, thus: the documentation - which AI agents aren’t too bad at writing.

      • you get shitty architecture to go along with it, only when you allow it.
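      To make the DRY/SSOT point above concrete, here’s a minimal, hypothetical sketch (the function and numbers are made up for illustration): duplicated logic collapsed into one definition that becomes the single source of truth.

```python
# Before: the 10% member-discount rule was copy-pasted at every call site.
# After: one function owns the rule (DRY), making it the single source of
# truth (SSOT) - changing the rate is now a one-line edit.

def discounted_price(price: float, is_member: bool) -> float:
    """Apply the member discount in exactly one place."""
    rate = 0.10 if is_member else 0.0
    return round(price * (1 - rate), 2)

cart_total = discounted_price(100.0, is_member=True)      # 90.0
receipt_total = discounted_price(100.0, is_member=False)  # 100.0
```

      This is the kind of mechanical consolidation a “review for DRY and SSOT opportunities” prompt tends to surface.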

      I’ve seen a lot of “10x PRODUCTIVITY!!!” claims, and when you move at those speeds you’re going to encounter exactly the problems you describe. If you move more deliberately, as if you are managing a revolving-door team of consultants, and have the discipline to manage the architectural design and documentation, the implementation documentation, the unit and integration tests, etc., some may argue it’s easier to do it by hand. In some cases it may be. But I feel like we’re at a point where you might expect more like a 3x productivity boost using AI agents vs not, with the bonus that you get the artifacts of disciplined development. Your human team will bitch and moan that “doing all that” (unit tests, docs) is slowing them down by 50-80%, so humans tend to skimp in those areas, whereas AI doesn’t complain at all when you task it with the 14th round of unit test coverage evaluation, refinement, and expansion.
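      That plan-then-verify loop can be sketched in a few lines (purely illustrative; `plan` and `reported_done` stand in for whatever your agent tooling actually records):

```python
# Sketch of "review the implementation against the plan": instead of
# trusting an "ALL DONE" claim, diff the written plan against the work
# actually reported as finished, and feed the gap back to the agent.

def outstanding(plan_steps, done):
    """Return plan steps that have no matching completed work."""
    return [step for step in plan_steps if step not in done]

plan = ["write parser", "add unit tests", "update docs"]
reported_done = {"write parser"}  # what the agent claims it finished

todo = outstanding(plan, reported_done)
# todo == ["add unit tests", "update docs"] -> next prompt: finish these
```

      The point is that the written plan, not the agent’s self-report, is what gets checked.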

      • You’re just speedrunning enterprise software or spending all your time reviewing slop code.

      When’s the last time you used an AI agent to write a significant chunk of code? https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

      • It’s like a drug, the first time it does something fast and well you feel it’s so great, and that’s a problem… if you’re going to party with cocaine you’re going to need some serious discipline to hold down a day job at the same time.

      • and can only ever suck. The world changes. The world of AI code development has changed significantly over the past year. A year ago I called it “cute, interesting potential, practically useless.” 6 months ago the improvements were so dramatic I decided I needed to get a handle on it - yeah, it was limited in complexity capability and did make a lot of slop, but it was so far ahead of where it was 6 months prior… Today, it’s not perfect, but it’s a lot better than it was 6 months ago, and while you can make a lot of slop with it, you also can keep a leash on it and clean up the slop while still making super-human forward progress.

      • Worst case you spend all your time wrestling with it and never get a finished product. - just like working with human teams.