- cross-posted to:
- privacy@programming.dev
LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters.
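For contrast, here is a minimal sketch of what secure, uniform sampling looks like, using Python's standard `secrets` module (the function name and length are my own choices, not from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Pick each character independently and uniformly with a CSPRNG,
    unlike an LLM, which predicts the statistically likely next token."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice uses the OS's cryptographically secure randomness source
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The key difference: every character here has an equal chance of being chosen, so the entropy is exactly `length * log2(len(alphabet))` bits, whereas an LLM's output distribution is heavily skewed toward token sequences it has seen before.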



Don’t tell me people are using LLMs to generate passwords
I know someone who generates passwords in an AI chat
Even where they aren’t, I bet this is something that could end up happening when using them as open-ended agents that might try making their own accounts. The article also mentions this:
People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones.
Despair
Very well. If you don’t want me to tell you the truth about people using LLMs to make passwords, I won’t.
Getting away from more direct requests, I can absolutely imagine AI offering passwords/suggestions as part of a coding session, including “temp” passwords that look secure, so “why bother changing it?”.
I imagine it’s a matter of asking it to generate some configuration where one of the fields is a password, so the LLM just auto-completes what a password is most likely to look like.