I can’t wait until billionaires realize how worthless they actually are without people doing everything for them
They will never realize that; they will naturally blame any failures on others. They truly believe they are better than everyone else, that their superior ability is what led them to invest in a company that increased in value enough to make them filthy rich.
Surrounded by yes-men and women who agree with everything they say and tell them what a genius they are. Of course any ill outcome isn’t their fault.
Wouldn’t hold my breath for it.
I had a meeting with my boss today about my AI usage. I said I tried using Claude 4.5 and was ultimately unimpressed with the results: the code was heavy and inflexible. He assured me Claude 4.6 would solve that problem. I pointed out that I am already writing software faster than the rest of the team can review because we are short staffed. He suggested I use Claude to review my MRs.
The trick is to tell them you’ve been using it more than they have, and that it’s not as good as ChatGPT for task A, but that for task B Claude does okay 25% of the time, so you’ll need to 4x the timeline in order to get a good Claude output based on that expected value.
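(For anyone checking the math: if each attempt succeeds independently with probability p = 0.25, the expected number of attempts until one succeeds is 1/p = 1/0.25 = 4, hence the 4x. That assumes independent attempts, which with LLM outputs is probably generous.)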
But not as good as your personal local LLM that you’ve been training on company data. No one else can use it because it’s illegal to clone. (your personal local LLM is your brain)
At work today we had a little presentation about Claude Cowork. And I learned someone used it to write a C (maybe C++?) compiler in Rust in two weeks at a cost of $20k and it passed 99% of whatever hell test suite they use for evaluating compilers. And I had a few thoughts.
- 99% pass rate? Maybe that’s super impressive because it’s a stress test, but if 1% of my code fails to compile I think I’d be in deep shit.
- $20k in two weeks is a heavy burn. Imagine if what it wrote was… garbage.
- “Write a compiler” is a complete project plan in three words. Find a business project that is that simple and I’ll show you software that is cheaper to buy than build. We are currently working on an authentication broker service at work and we’ve been doing architecture and trying to get everyone to agree on a design for 2 months. There are thousands of words devoted to just the high level stuff, plus complex flow diagrams.
- A compiler might be somewhat unique in the sense that there are literally thousands of test cases available: download a FOSS project and try to compile it; if it fails, figure out the bug and fix it; repeat (see the sketch after this list). The ERP that your boss wants you to stand up in a month has zero test coverage and is going to be chock-full of bugs, if for no other reason than you haven’t thought through every single edge case, and neither has the AI, because a lot of the time those are business questions.
- There is not a single person who knows the code base well enough to troubleshoot any weird bugs and transient errors.
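For what it’s worth, the compile-test loop in the test-cases point above is trivial to automate. A crude sketch, assuming a hypothetical candidate compiler at ./mycc and a corpus/ directory of unpacked FOSS C projects (both names made up for illustration):

```python
#!/usr/bin/env python3
"""Crude compile-the-world loop for smoke-testing a toy C compiler.

Hypothetical names: ./mycc is the candidate compiler, corpus/ holds
unpacked FOSS C projects. Real projects also need include paths and
defines from their build systems; this is only a first pass.
"""
import subprocess
from pathlib import Path

COMPILER = "./mycc"      # hypothetical candidate compiler binary
CORPUS = Path("corpus")  # hypothetical directory of unpacked projects

sources = sorted(CORPUS.rglob("*.c"))
failures = []
for src in sources:
    # Compile each translation unit on its own; a nonzero exit is a bug lead.
    result = subprocess.run(
        [COMPILER, "-c", str(src), "-o", "/dev/null"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        failures.append((src, result.stderr.strip()))

print(f"{len(failures)}/{len(sources)} files failed to compile")
for src, err in failures[:10]:  # first few leads to chase down
    first_line = err.splitlines()[0] if err else "(no diagnostic)"
    print(f"{src}: {first_line}")
```

Fix whatever the first failure points at, rerun, repeat. The point is that a compiler gives you this feedback loop for free; your ERP project does not.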
I think this is a cool thing in the abstract. But in reality, they cherry-picked the best possible use case in the world, and anyone expecting their custom project to go like this will be lighting huge piles of money on fire.
I also often get assigned projects where all the tests are written out beforehand and I can look at an existing implementation while I work…
Also, software development is already the best possible use case for LLMs: you need to build something abiding by a set of rules (as in a literal language, lmao), and you can immediately test if it works.
In a legal use case, by contrast, you can jerk off to the confident-sounding text you generated, then get chewed out by the judge for having hallucinated references. Even if you have a set of rules (laws) as guardrails, you cannot immediately test what the AI generated, and if an expert needs to read and check everything in detail anyway, why not just do it themselves in the same amount of time?
We can go on to business, where the rules the AI can work within are much looser, or healthcare, where the cost of failure is extremely high. And that’s before we even talk about responsibility and official accountability for decisions.
I just don’t think what is claimed for AI is there. Maybe it will be someday, but I don’t see it as an organic continuation of the path we’re on. We might see another dot-com-style bust when investors realize this: LLMs will stick around (the same way the internet did), but they will not become AGI.
Don’t forget that there are tons of C compilers in the dataset already
A C compiler in two weeks is a difficult, but doable, grad school class project (especially if you use lex and yacc instead of hand-coding the parser). And I guarantee 80 hours of grad student time costs less than $20k. Frankly, I’m not impressed by the anecdote from your presentation at all.
Agree with all points. Additionally, compilers are also incredibly well specified via ISO standards etc., and have multiple open-source codebases available, e.g. GCC, which is available in multiple builds and implementations for different versions of C and C++, and DQNEO/cc.go.
So there are many fully-functional and complete sources that Claude Cowork would have pulled routines and code from.
The vibe-coded compiler is likely unmaintainable, so it can’t be updated when the spec changes, even assuming it worked and was real in the first place. So you’d have to redo the entire thing. It’s silly.
Updates? You just vibecode a new compiler that follows the new spec
“Top-down mandates to use large language models are crazy,” one employee told Wired. “If the tool were good, we’d all just use it.”
Yep.
Management is often out of touch and full of shit
Management: “No, that doesn’t work, because employees spend so much time doing the actual work that they lack the vision to know what’s good for them. Luckily for them I am not distracted by actual work so I have the vision to save them by making them use AI.”
You wanna know who really bags on LLMs? Actual AI developers. I work with some, and you’ve never heard someone shit all over this garbage like someone who works with neural networks for a living.
There’s this great rage blog post from 1.5 years ago by a data scientist
That’s me, but for QA…
Man, corporate layoffs kill productivity completely for me.
Once you do layoffs, >50% of the job becomes performative bullshit to show you’re worth keeping, instead of building things the company actually needs to function and compete.
And the layoffs are random with a side helping of execs saving the people they have face time with.
Who?
The original creator of Twitter, then of Bluesky, and now of whatever this thing that’s going off the rails is.
Basically another billionaire living in his own little bubble and huffing his own farts too much.
He left Bluesky around 2 years ago.
That must be why they are doing okay, haha.
He also had a lot to do with Nostr, early on.
Jack Dorsey has endorsed and financially supported the development of Nostr, donating approximately $250,000 worth of Bitcoin to the developers of the project in 2023,[13][15] as well as making a $10 million cash donation to a Nostr development collective in 2025.
What?
Oops, I misread my source. I’ve updated my comment.
Is that thumbnail a scene from 12 monkeys?
Naw. This is clearly just 1 monkey.
pffft, you give him too much credit.
Right before he dies, yeah
Uhhh, Block is the parent company of Square (formerly known as Square Up). This is actually a huge company, not some little side thing.
Gonna need a longer beard.