An AI apocalypse won’t come from an AI becoming sentient, but from some idiot putting AI where it shouldn’t be.
“We installed Gemini into all US nuclear silos.”
According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.
Producing inaccurate technical advice, with a confident tone, at scale.
If that LLM were an employee, it would get a formal reprimand, and then be demoted or fired if it kept it up.
Wait til this starts happening in the construction industry.
That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.
That’s a good way to represent LLMs. Very bad and very prolific consultants.
“Flagrant security lapse caused an incident when software engineer uses inappropriate tool for the job.”
“Inappropriate tool also weirdly good at gaslighting engineers and managers”
“Rogue AI,” as if it’s some sentient evil thing, when it’s just an LLM with too many permissions… This timeline is so dystopian, yet simultaneously incredibly lame. I hate it.
It’s also a pretty big exaggeration of what actually happened, which is that it generated and posted some technically inaccurate information.
It shows LLMs can do significant harm without the capabilities of an AGI.
Overhyping LLMs and overinflating their capabilities makes things worse, as people are less skeptical of LLM output.