“AI is useless because I want it to be so” is Jedi logic, not applicable IRL.
AI can be a useful tool.
Just in case it slipped your eyes when you saw “Torvalds”: this is not about the Linux kernel.
AI is fine when it is your hands, not when it is your brain. As long as you can vet it, it can be useful for coding. It’s the moment we give agency away to the AI that many see issues with it.
Quick, let’s all abandon Linux (edit: and git) because the main developer did something we don’t like! /s
I would say Hans Reiser enters the room… But he didn’t…
The problem is that AI is not useless. It has a lot of other issues, but not that it is never a helpful tool.
With constrained data sets it’s actually really useful.
Parsing text and logs and correlating events, super useful.
When you dump all human “intelligence” into it you discover how dumb we are collectively.
If:

1. you’re smart or practised enough to be able to generate what you’re asking the AI to do for yourself,
2. you’re able to take what the AI generates and debug, check and correct it using non-AI tools like your own brain,
3. you’re sure this whole AI-inclusive process will save time and money, and
4. you’re sure using AI as a crutch won’t cause you brain-rot in the long term,

go nuts.
Caveat: Those last two are tricky traps. You can be sure and wrong.
Otherwise, grab the documentation or a bunch of examples and start hacking and crafting. Leave the AI alone. Maybe ask it a question about something that isn’t clear, but on no account trust it. It might have developed the same confusion that you have for precisely the same reasons.
So anyway, Linus clearly fits 1 and 2, and believes 3 and 4 or else he wouldn’t be using an AI. Let’s just hope he hasn’t fallen into the traps.
I’ve long found it funny how some people claim that generative AI produces terrible slop, and simultaneously that it’s a huge threat to their jobs.
The people who make firing decisions often aren’t the ones doing the day-to-day work.
It’s very possible to be replaced by a machine that does a worse job, as long as your (ex-) boss isn’t aware of it.
Or sometimes they are aware, but it’s sufficiently cheaper that they don’t care.
Schrödinger’s ~~immigrant~~ AI

From the project’s README:
Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters – and that’s not saying much – than I do about python. It started out as my typical “google and do the monkey-see-monkey-do” kind of programming, but then I cut out the middle-man – me – and just used Google Antigravity to do the audio sample visualizer.
This is the commit: https://github.com/torvalds/AudioNoise/commit/93a72563cba609a414297b558cb46ddd3ce9d6b5
Tbf it’s his project so he can do whatever he wants
Issue is when people do things like that one dude who had Claude implement support for DWARF in… Whatever language it was (Something MLy I think?) and literally didn’t even remove the copyright attribution to some random 3rd person that Claude added. It was a PR of several thousand lines, all AI generated and he had no idea how it worked, but said it’s ok, Claude understands it. He didn’t even tell anyone he was going to be working on it so there was no discussion of the architecture beforehand.
Edit: OCaml. So I was right that it was something ML-y lol
Claude understands it
Only the words of it tho.
Only the probability of the next token after tokenisation of it.
Not even that.
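The quip about “the probability of the next token after tokenisation” can be made concrete with a toy sketch. This is a hypothetical hand-written bigram table with greedy decoding, purely illustrative — it is nothing like any real model’s internals:

```python
# Toy "next token" predictor: a hypothetical bigram probability table.
# The tokens and probabilities here are made up for illustration only.
next_token_probs = {
    "Claude": {"understands": 0.6, "generates": 0.3, "hallucinates": 0.1},
    "understands": {"it": 0.7, "nothing": 0.2, "code": 0.1},
}

def predict_next(token: str) -> str:
    # Greedy decoding: just return the most probable successor token.
    probs = next_token_probs[token]
    return max(probs, key=probs.get)

print(predict_next("Claude"))       # → understands
print(predict_next("understands"))  # → it
```

Real models condition on the whole context and sample from the distribution rather than always taking the argmax, but the point of the joke stands: it is successor probabilities over tokens, not comprehension.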
I wonder how much Google paid him.