Joined 3 years ago
Cake day: June 16th, 2023






  • Based on my experience with LLMs and the developers I personally know, my only conclusion is they didn’t have the skills in the first place…

    In the corporate world there are a lot of “developers” who already act kind of like codegen: they throw plausible-sounding bullshit into an editor and hope for the best. Two examples:

    Once, I was asked to help a team speed up something that ran slow even by their low standards. It turned out they had written their own file-copy routine instead of using the standard library one: it sucked the whole file into memory, growing an array 512 bytes at a time, then wrote it back out, 512 bytes at a time. I made the thing nearly instant just by replacing it all with a call to the standard library function that copies a file.
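    For flavor, here’s roughly what the fix looks like in Java (the file names are made up for illustration; the anecdote doesn’t say which language or function was involved): the whole hand-rolled grow-a-buffer-512-bytes-at-a-time routine collapses into one standard-library call.

    ```java
    import java.nio.file.*;

    public class CopyDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical file names, just for the demo
            Path src = Paths.get("source.bin");
            Path dst = Paths.get("dest.bin");
            Files.write(src, new byte[1024]); // create a 1 KiB sample file

            // One standard-library call replaces the hand-rolled
            // read-into-a-growing-array, 512-bytes-at-a-time routine
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);

            System.out.println(Files.size(dst)); // prints 1024
        }
    }
    ```

    The library call can also use OS-level fast paths, which is part of why the difference was “nearly instant” rather than merely faster.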

    While helping with a separate problem, I noticed their solution for transferring a file with an indeterminate version number in the middle of its name. The whole thing was a huge mess, but the most illustrative line was the one in their Java application declaring the string “ls /path/with/file|grep prefix.*.extension”…
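    For comparison, matching a versioned file name is a few lines of in-process Java with a glob, no shelling out to `ls | grep` required (directory and pattern here are hypothetical stand-ins for the redacted ones):

    ```java
    import java.nio.file.*;

    public class GlobDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical directory and files for illustration
            Path dir = Files.createTempDirectory("glob-demo");
            Files.createFile(dir.resolve("prefix-1.2.3.extension"));
            Files.createFile(dir.resolve("other.txt"));

            // Glob matching in-process instead of spawning a shell pipeline
            try (DirectoryStream<Path> matches =
                     Files.newDirectoryStream(dir, "prefix-*.extension")) {
                for (Path p : matches) {
                    System.out.println(p.getFileName()); // prints prefix-1.2.3.extension
                }
            }
        }
    }
    ```

    Besides being shorter, this avoids shell-injection risk and doesn’t break when file names contain spaces, which the `ls | grep` string would.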

    Lots of human slop out there that AI can actually compete with.


  • I just don’t get it. Even the purportedly best models screw things up so much that I can’t leave them to the job without reviewing and fixing the mess they made… And I’m drowning in pull requests that turn out to be broken while proudly carrying “Co-authored-by: Claude”… They manage to pass their test case, but the change is so messed up that it either explicitly causes problems or includes a bunch of random, unrelated edits.

    I feel like I’m being gaslit when I keep reading about developers who feel they’ve successfully offloaded the task of coding.

    The closest I got was a chore with a perfect success criterion: “address all warnings from the build”. I let it go and iterate. After 50 rounds, each ending with “ok, should be done now, everything is taken care of, just need to do a final check”, it had burned through most of my monthly quota before declaring success. Then I looked at the proposed change… it had just added directives to the top of every file telling the tools to disable all the warnings… This was the best opus 4.6 could do…
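    The kind of “fix” described above looks something like this in Java (the anecdote doesn’t say which language or tool was involved, so this is an illustrative stand-in): a blanket per-file directive that makes every warning vanish without fixing anything.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Blanket directive: silences every compiler warning in this class
    // instead of fixing the underlying issues.
    @SuppressWarnings("all")
    public class WarningsGone {
        public static void main(String[] args) {
            List raw = new ArrayList(); // raw type: would normally warn
            raw.add("compiles silently");
            System.out.println(raw.get(0));
        }
    }
    ```

    The build is “clean”, the success criterion is technically met, and every real problem is still there.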

    Now sure, I can have it tear through short boilerplate, or have it notice a pattern I’m repeating and tab through it. But I haven’t seen this “vibe” approach work at all…



  • I think I’ll need a citation for that. From what I can find, LFP chemistry is still more energy-dense than CATL’s sodium cells, which makes sense because the physics are what they are: a sodium atom is about three times as massive as a lithium atom (≈23 u vs ≈6.9 u). The best version of the argument I can see is debating whether there’s a space in the market between sodium and NMC for LFP: if you’re already compromising on density, what’s one further compromise to get the other qualities you mention for sodium?









  • I’ll say that “vibe coding”, to me, implies the operator has zero awareness of the actual code, and that’s where things go wrong.

    They treat the actual program logic the way folks treat assembly code: as arcane black magic they don’t have to think about. The problem is that the tooling is nowhere near as deterministic as a compiler, and the output is just too bad to be relied upon.

    For certain classes of tasks it may do a serviceable job, at least at first. But if you have ongoing evolution requirements, it can dig itself a hole it can’t dig out of: it can’t process the code it has already generated well enough to extrapolate a change that matches the change request.

    GenAI coding needs supervision, and “vibe coding” means opting out of careful supervision.


  • This is just so fitting.

    I keep getting merge requests now from people who, for their whole careers to date, had been too scared off by the syntax to even try coding.

    It’s almost always a shotgun blast of way too many changed lines for a small thing, often with horrible side effects that would never be acceptable.

    Someone wanted to tweak the CSS layout of one element, which should have been a one-line change. The pull request had hundreds of CSS changes, touching basically everything. Clearly the model had started changing things, he kept telling it the tweak still wasn’t working, and once it finally worked it never rolled back anything it had done along the way, including many rules repeated five times in a row in the same place…

    They felt AI made them so much more helpful because they could submit a code change directly instead of just asking for what they wanted. They’d proudly say “AI told me:” and then explain the brilliance of the AI’s finding. One time, the issue the AI found had been fixed upstream over six months earlier; the AI never thought to update the software, and instead proposed a really poor workaround that would have failed to cover a whole class of similar scenarios while imposing crazy side effects on scenarios that weren’t tested.

    I can use AI too; please just send me what you would have sent to the AI, and if the AI can do it, I’ll use the AI. If you think the AI will figure out how you’re using something wrong and you don’t want to bother or wait for a human to help, fine. But the moment it decides it has found a software bug, rewind and start from your original problem statement when you come to me…



  • Sometimes it just doesn’t pan out.

    Had a junior dev who basically decided he’d rather grift his way through than do the job. I’ve never seen someone work so hard at not working at all. Every day there was a different excuse, a different person to point at for why he hadn’t even tried to do anything that day. I think at least 7 or 8 of his grandmothers died during his tenure. And management ate it up.

    Until one day he lost track of his stories and blamed the manager when asked why things weren’t done, claiming the manager had never sent him some material, when of course the manager had. Suddenly the manager believed the rest of us, who had been saying for months that he was lying…

    The key was that he was cheap and, in theory, supposed to be as good as a higher-paid alternative, so ditching him would have meant management admitting they were wrong…