…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before and gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved some 3d math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
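
For a sense of what “subtly broken” means here, a hypothetical Python sketch (this is an invented illustration, not my actual task) of the kind of 3d-math failure mode I kept hitting:

```python
import numpy as np

def rotate_z(p, theta):
    # Rotate point p about the Z axis by theta radians
    # (right-handed, column-vector convention).
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])
    return R @ p

# The subtle failure mode: flip the sign of s (equivalently, transpose R)
# and you get the *inverse* rotation. Spot checks at theta = 0 and
# theta = pi still pass, so the bug only shows up at angles in between.
print(rotate_z(np.array([1.0, 0.0, 0.0]), np.pi / 2))  # ≈ [0, 1, 0]
```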

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because the fix didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • f3nyx@lemmy.ml · 2 days ago

    hey shaggy. I want to touch on your last point as a newer developer:

    My department is finally seeing 10x development thanks to the shift from writing code to writing specs. The main issue is that our pipeline is now stuck at review, so all that extra output is effectively wasted. Do you have any tips on what worked for you, if you had a similar situation?

    • shaggy@beehaw.org · 1 day ago

      This is our new bottleneck too. Developers’ roles are shifting more and more toward spec writing and code review. I don’t think I’d call it wasted effort, though (unless the code produced is worse than what developers would have written otherwise). I’d think of it as a good problem to have.

      We’re doing several things to alleviate this, and I’m genuinely curious how other teams are handling this too.

      • We have Claude running code reviews on our PRs too 😄. In our department, a PR isn’t expected to be reviewed by a dev until the author has addressed, or reviewed and dismissed, all of the issues Claude has brought up (a rough sketch of what that kind of gate can look like follows this list).
      • There is pressure for developers on our team to become better reviewers. I think this is good, because reviewing code is a more valuable skill to prospective employers than writing it is anyway.
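
      For anyone curious about the first bullet, a minimal sketch of that kind of review gate, assuming the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment. The model id, prompt, and `review_diff` helper are all placeholders, not our actual setup:

      ```python
      import subprocess
      import anthropic

      MODEL = "claude-opus-4-6"  # hypothetical id; use whatever your account exposes

      def review_diff(base: str = "origin/main") -> str:
          # Collect the PR's diff against the base branch.
          diff = subprocess.run(
              ["git", "diff", base, "--", "."],
              capture_output=True, text=True, check=True,
          ).stdout
          # Ask the model for a first-pass review of the diff.
          client = anthropic.Anthropic()
          message = client.messages.create(
              model=MODEL,
              max_tokens=2048,
              messages=[{
                  "role": "user",
                  "content": "Review this diff. List concrete bugs, risky changes, "
                             "and style issues, one per line:\n\n" + diff,
              }],
          )
          return message.content[0].text

      if __name__ == "__main__":
          print(review_diff())
      ```

      The gate itself is just wiring output like this into CI and requiring the author to address or dismiss each item before requesting a human review.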
      • f3nyx@lemmy.ml · 1 day ago

        thanks for the response. for what it’s worth, most people I ask this question are attempting some form of your first bullet point. I think we’re on the right track there; it only makes sense.

        speaking for myself, your second point is the silver lining of all this. I’ve never had this kind of pressure before, but I hope it’s the kind of pressure that makes me a better dev instead of burning me out.

        cheers!

    • Senal@programming.dev · edited · 2 days ago

      If you’re stuck at review you aren’t seeing 10x development, you’re seeing 10x code generation.

      This is especially important because without the review/test/deploy part of the pipeline you aren’t actually seeing any progress towards business goals.

      Once you do get these parts sorted, you can then look at what multiplier you’re seeing.

      That’s not to say there isn’t an improvement in your workflow, just that you can’t say with any certainty what kind of improvement until you measure end to end.

      It might turn out that the rest of the pipeline is way easier, in which case your multiplier will be higher; it might also be much harder, in which case the multiplier will be lower.
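
      To put toy numbers on it (all figures invented for illustration): end-to-end throughput is capped by the slowest stage, so a 10x generation stage doesn’t move the needle on its own.

      ```python
      # Hypothetical stage capacities, in work items per week.
      stages = {
          "spec + generation": 100,
          "review": 12,
          "test + deploy": 40,
      }

      # End-to-end throughput is the minimum across stages.
      throughput = min(stages.values())
      bottleneck = min(stages, key=stages.get)
      print(f"end-to-end: {throughput}/week, bottlenecked at '{bottleneck}'")

      # Generating 10x more code moves 'spec + generation' to 1000/week,
      # but end-to-end stays at 12/week until review scales too.
      ```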

      I’m not taking shots, I mean it seriously, especially if you need to report any of this to the rest of the business.


      edit: In addition, if it turns out that review is going to be a bottleneck, you can get extra resources pointed in that direction, which will benefit the workflow overall.

      another edit: I would consider it a vital skill to correctly manage the expectations of those you report to.

      • Dangerhart@lemmy.zip · 2 days ago

        Exactly this. My experience with our company’s wrapper on Claude lines up with OP, not with this comment thread.

        Everyone seems to forget that everything you write is a liability. You can’t have bugs in code that is never written or generated; comments that don’t exist never become inaccurate; and knowledge that isn’t duplicated into a repo can’t drift away from business goals as they change over the long term.

        From what I’ve seen, the people claiming a “10x increase” did not have a strong foundation to begin with and/or did not utilize tools like IDEs effectively. No offense to the thread OP, whose post itself reads like a generated response, but in the time he has done all of that, a strong engineer would be long done. Everything listed should be done, along with business and product partners, before ever getting into code.

        • zbyte64@awful.systems · 2 days ago

          Everything listed should be done, along with business and product partners, before ever getting into code.

          Ehh, it really depends on where the risk is, and the problem is that LLMs can’t evaluate for that unless you feed them everything. Some projects need code experiments before you settle on an architecture, but that’s only if you’re a pioneer (which frankly is where the money is at).

      • f3nyx@lemmy.ml · 2 days ago

        that’s a very good distinction, absolutely. it’s just code generation at this stage.

        the review was the bottleneck before (as I believe was already the case for many companies), but now, with 10x the code generated for review, the bottleneck has turned into a dripping faucet.