…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn’t work well; I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple and involved some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
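
To give a sense of what “subtly broken” means, here is a hypothetical sketch in Python (not my actual code; `rotate_z` is just an illustrative name) of the flavor of bug it kept producing - code that looks right and almost works:

```python
import math

def rotate_z(v, angle_deg):
    """Rotate a 3D vector around the Z axis by angle_deg degrees."""
    a = math.radians(angle_deg)  # correct: sin/cos expect radians
    # a = angle_deg              # the subtle version: degrees fed straight
    #                            # to sin/cos "almost works" for small
    #                            # angles, then quietly drifts
    x, y, z = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

print(rotate_z((1.0, 0.0, 0.0), 90.0))  # ~(0.0, 1.0, 0.0)
```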

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… Except it was clearly bullshit because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • ThirdConsul@lemmy.zip · 3 days ago

    The .NET runtime team, after 10 months of using and measuring where LLMs (including the latest Claude models) shine, reported a mind-boggling success rate peaking at 75% (sic!) for changes of 1-50 LOC - and that’s with an agentic setup (you give it a prompt, context, etc., and it can run the codebase, compile it, add tests, reason, and repeat from any step).

    > Except it was clearly bullshit because it didn’t work.

    Welcome to LLMs, where everything is hallucinated and correctness doesn’t matter.

    > Is anyone having success with these tools?

    Define success.

    > Is there a special way to prompt it?

    It gets better the more you use it; you will learn what works for you and what does not. Right now the hot shit is “autonomous agent swarms”, peddled by the token sellers as a way to output massive features correctly. Do not touch that for now.

    What helps with Claude / LLMs 101:

    • when it tells you something about an API, a tool, or whatever, tell it the exact tool version and order it to give you the documentation page proving the solution is possible (e.g. “we’re on version X, link the docs page that shows this method exists”).

    • when it one-shots a working solution you will get a dopamine hit. Be aware of that, as it can be addictive and make you trust it. Do not trust it; it sucks long term.

    • it will always default to a below-average solution. Know where your hotspots are, and be extra judgy there.

    • it will get lazy and lie to you, especially with tests (see the first sketch after this list).

    • it will not propose code refactors on its own.

    • despite the token peddlers’ claims, and no matter if you’re using the 1M-token context window model, the shit degrades once the context is over 20k-30k tokens - so start fresh contexts often for better outcomes. That means you will be burning more money, which obviously benefits the token peddlers (see the second sketch after this list).

    • do not trust the hype - so far any and all tall claims of a breakthrough from the token peddlers have been lies (e.g. vibing a working OS that can run Doom, vibing a 96% next.js replacement in a week, vibing a browser, a compiler, a browser jailbreak via Mythos).
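
    On the lazy-tests point, a minimal hypothetical sketch (the function and file names are made up) of the classic failure mode - a generated test that stays green while never exercising the real code:

    ```python
    import unittest
    from unittest.mock import patch

    def parse_config(path):
        # Pretend this is the function the model was asked to test.
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f)

    class TestParseConfig(unittest.TestCase):
        @patch("__main__.parse_config", return_value={"key": "value"})
        def test_parse_config(self, mock_parse):
            # Bug: the unit under test is mocked out, so the real parser
            # never runs and could be arbitrarily broken.
            self.assertEqual(parse_config("any.cfg"), {"key": "value"})

    if __name__ == "__main__":
        unittest.main()
    ```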
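
    And on context windows, a crude sketch of when to bail to a fresh chat, assuming the common ~4-characters-per-token rule of thumb (both the heuristic and the 25k budget are my assumptions, not vendor numbers):

    ```python
    def approx_tokens(text: str) -> int:
        # No real tokenizer here; ~4 chars per token is a rough heuristic.
        return len(text) // 4

    def should_start_fresh(transcript: str, budget_tokens: int = 25_000) -> bool:
        """True once the chat creeps past the range where quality
        reportedly degrades (~20k-30k tokens)."""
        return approx_tokens(transcript) > budget_tokens

    transcript = "prompt and replies " * 8_000  # stand-in for a long chat
    if should_start_fresh(transcript):
        print("Summarize the current state and open a new chat.")
    ```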

    > Would I get better results during certain hours of the day?

    Afaik, US time zones see worse performance.