Before hitting submit, I’d worry I’d made a silly mistake that would make me look a fool and waste their time.
Do they think the AI-written code Just Works™? Do they feel so detached from that code that they don’t feel embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover of a book you didn’t write, when the book is nonsense.
Nowadays people use OpenClaw agents which don’t really involve human input beyond the initial “fix this bug” prompt. They independently write the code, submit the PR, argue in the comments, and might even write a hit piece on you for refusing to merge their code.
I suspect they’ll have to combat AI code with an AI-code-recognizer tool that auto-flags a PR or issue as AI-generated, so maintainers can run through and close them in bulk. If the contributor doesn’t come back to explain the code and show test results proving it works, the PR gets auto-closed after a week or so.
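The auto-close half of that is easy enough to script against the GitHub REST API. Here’s a minimal sketch in Python, assuming some detector has already applied a hypothetical “ai-flagged” label, and treating any follow-up comment from the PR author as them coming back to explain (the repo name, label, and grace period are all made up):

    # Sketch of the auto-triage bot described above. Assumes a separate
    # detector has already applied a hypothetical "ai-flagged" label;
    # the repo name, label, and 7-day grace period are all illustrative.
    import datetime as dt
    import os
    import requests

    API = "https://api.github.com"
    REPO = "example-org/example-repo"  # hypothetical repository
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }
    GRACE = dt.timedelta(days=7)  # "a week or so"

    def iso(ts):
        # GitHub timestamps look like "2026-02-01T12:00:00Z"
        return dt.datetime.fromisoformat(ts.replace("Z", "+00:00"))

    def flagged_prs():
        # The issues endpoint lists PRs too; PRs carry a "pull_request" key.
        # (Pagination is ignored to keep the sketch short.)
        r = requests.get(f"{API}/repos/{REPO}/issues", headers=HEADERS,
                         params={"labels": "ai-flagged", "state": "open"})
        r.raise_for_status()
        return [i for i in r.json() if "pull_request" in i]

    def author_responded(pr):
        # Treat any comment from the PR author as "coming back to explain".
        r = requests.get(pr["comments_url"], headers=HEADERS)
        r.raise_for_status()
        return any(c["user"]["login"] == pr["user"]["login"] for c in r.json())

    def close_with_note(pr):
        n = pr["number"]
        msg = ("Auto-closed: flagged as AI-generated, and no explanation "
               "or test results were provided within a week.")
        requests.post(f"{API}/repos/{REPO}/issues/{n}/comments",
                      headers=HEADERS, json={"body": msg}).raise_for_status()
        requests.patch(f"{API}/repos/{REPO}/issues/{n}",
                       headers=HEADERS, json={"state": "closed"}).raise_for_status()

    now = dt.datetime.now(dt.timezone.utc)
    for pr in flagged_prs():
        if now - iso(pr["updated_at"]) >= GRACE and not author_responded(pr):
            close_with_note(pr)

The hard part, of course, is the recognizer doing the flagging, not the closing.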
From what I’ve seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open source repos with little to no human input.
Can Cloudflare help prevent this?
You guys, it’s almost as if AI companies are trying to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook: flood the zone with slop.
AI bros have zero self-awareness and shame, which is why I keep arguing that the best tool for fighting this is making it socially shameful.
Somebody comes along saying “Oh look, the image is just genera…” and you cut them off with “looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that hahah, so anyway, what were you saying?”
Not good enough; you need to poison the data.
I don’t want my data poisoned, I’d rather just poison the AI bros.
Yeah but then their Facebook accounts will keep producing slop even after they’re gone.
Tempting, but even that isn’t good enough, as another reply pointed out:
the data eventually poisons itself when it can do nothing but refer to its own output from however many generations of hallucinated data
LLM code generation is the ultimate Dunning-Kruger enhancer. Its users think they’re 10x ninja wizards because they can generate unmaintainable demos.
They’re not going to maintain it - they’ll just throw it back to the LLM and say “enhance”.
Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn’t exist, or it’ll use the face of someone in the training set and they’ll go after the wrong person.
Either way, I have a feeling there’ll be some ENHANCE failure episode due to AI.
yes.
literally yes.
It’s insane
That’s how you know who never even tried to run the code.