- cross-posted to:
- privacy@programming.dev
LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters.
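For contrast, secure password generation means uniformly sampling characters from a CSPRNG, not predicting likely tokens. A minimal sketch using Python's standard-library secrets module (the 20-character default length is an arbitrary choice for illustration):

```python
import secrets
import string

def generate_password(length=20):
    """Uniformly sample each character with a CSPRNG (OS entropy),
    unlike an LLM, which predicts the most *likely* next token."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Every character is drawn independently and uniformly, so the entropy is length × log2(len(alphabet)) bits, with no bias toward "password-looking" strings.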
Welp, that’s another thing I hadn’t thought you could use llms for and another thing I would never do
Don’t tell me people are using llms to generate passwords
I know someone who generates passwords in an AI chat
Even where they aren’t, I bet this is something that could end up happening when using them as open-ended agents that might try making their own accounts. The article also mentions this:
Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.
People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones
Despair
Very well. If you don’t want me to tell you the truth about people using LLMs to make passwords, I won’t.
Getting away from more direct requests, I can absolutely imagine AI offering passwords/suggestions as part of a coding session. Including “temp” passwords that look secure, so “why bother changing it?”.
I imagine it’s a matter of asking it to generate some configuration, and one of the fields in that configuration is for a password, so the LLM just auto-completes what a password is most likely to look like.
Why not do this…
Correct horse battery staple

Many password manager generators already do (use the “memorable” type).
pls don’t spread my password around like that
The problem is that in this case, the LLM just naively auto-completes a password from what it knows a password to most likely look like.
It is possible to enable an LLM to call external tools and to provide it with instructions, so that it’s likely to auto-complete the tool call instead. Then you could have it call a tool to generate a correct-horse-battery-staple passphrase, or a completely random password, e.g. by calling the pwgen command on Linux.

But yeah, that just isn’t what this article is about. It’s specifically about cases where an LLM is used without tool calls and therefore naively auto-completes the most likely password-like string.
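Such a tool could be as simple as the sketch below — a hypothetical generate_passphrase function an agent might call instead of auto-completing a password itself. The short WORDS list is a toy stand-in; a real deployment would load a proper wordlist (e.g. the EFF large wordlist, ~7776 words, giving about 12.9 bits of entropy per word):

```python
import secrets

# Toy wordlist for illustration only -- substitute a real wordlist in practice.
WORDS = ["correct", "horse", "battery", "staple",
         "quilted", "notepad", "nematode", "irregular"]

def generate_passphrase(n_words=4, separator="-"):
    """Words are chosen with secrets.choice (a CSPRNG),
    not by next-token prediction."""
    return separator.join(secrets.choice(WORDS) for _ in range(n_words))

print(generate_passphrase())
```

The point is that the randomness comes from the operating system's entropy source; the LLM only decides to call the tool, never to invent the password text itself.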
I’m kinda interested how many accounts you could log in to with those strings :D
Yes!
QuiltedNematoadNotepad486
LLM-generated passwords
This is akin to asking Karen from accounting to generate a password for you, and trusting that it will be a true random and secure password and that she won’t yap about it to everyone.
That statement is one of the most painfully dumb things I’ve read in my life, and I’ve read the Bible.
Irregular ? Like my bowel movements ?
Not again this horrible web design/engine or whatever, lagging/stuttering in a way that almost induces an epileptic seizure in me.