…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before and gave up because it didn’t work well; I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I’d had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • pixxelkick@lemmy.world
    1. Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality, as it’ll see warnings/hints/suggestions from the LSP (see the config sketch after this list).

    2. Unit tests. Unit tests. Unit tests. Unit tests.
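    Re: point 1, here’s a minimal sketch of what that wiring can look like. This assumes Claude Code’s `.mcp.json` format; the `lsp-mcp` server name and its flags are placeholders for whatever LSP-to-MCP bridge you use, and `csharp-ls` stands in for your language server:

    ```json
    {
      "mcpServers": {
        "lsp": {
          "command": "lsp-mcp",
          "args": ["--lsp", "csharp-ls", "--workspace", "."]
        }
      }
    }
    ```

    The exact bridge doesn’t matter much; the point is that the agent sees the same diagnostics your editor does instead of flying blind.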

    I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs. actual outcomes.

    Instead of reasoning out “it should do this” they can just run the damn test and find out.

    They’ll iterate on it until it actually works, and then you can look at it and confirm whether it’s good or not.
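    To make that concrete, here’s the shape of such a test. This is a generic sketch, not code from my repo: it assumes xUnit and System.Numerics, but any matrix type and test framework works the same way:

    ```csharp
    using System;
    using System.Numerics;
    using Xunit;

    public class TransformMathTests
    {
        [Fact]
        public void ChildWorld_FollowsParentRotationAndTranslation()
        {
            // Parent: rotated 90 degrees around Z, then moved to (10, 0, 0).
            // System.Numerics uses row vectors, so composition reads left to right.
            var parentWorld =
                Matrix4x4.CreateRotationZ(MathF.PI / 2f) *
                Matrix4x4.CreateTranslation(10f, 0f, 0f);

            // Child: offset (1, 0, 0) in the parent's local space.
            var childWorld = Matrix4x4.CreateTranslation(1f, 0f, 0f) * parentWorld;

            var origin = Vector3.Transform(Vector3.Zero, childWorld);

            // The 90-degree Z rotation maps local +X to world +Y: expect (10, 1, 0).
            Assert.Equal(10f, origin.X, 3);
            Assert.Equal(1f, origin.Y, 3);
            Assert.Equal(0f, origin.Z, 3);
        }
    }
    ```

    An agent that scrambles the multiplication order fails this immediately instead of handing you something that only looks right.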

    I use Sonnet 4.5 / 4.6 extensively and, yes, it’s prone to getting the answer almost right but wrong in the end.

    But the unit tests catch this, and it corrects.

    Example: I am working on my own game engine with MonoGame, and it’s about 95% vibe coded.

    This transform math is almost 100% vibe coded: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame/Transform/TransformRegistry.cs

    The reason it’s solid is this test suite: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame.Tests/Transform/Integrations/TransformRegistryIntegrationTests.cs

    Also vibe coded, then sanity-checked by hand to confirm the math in the tests checks out.

    And yes, it caught multiple bugs, but the agent could respond to that automatically: fix the bug, rerun the tests, and iterate until everything was solid.

    Test-driven development is huge for making agents self-police their own code.
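    One way to wire that in, assuming Claude Code’s CLAUDE.md project-instructions convention (the filename is Claude-specific; other agents have equivalents):

    ```markdown
    # Testing rules
    - Before changing transform code, write or update a failing test first.
    - After every change, run `dotnet test` and read any failures.
    - Never report a task as done while a test is red.
    ```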