- cross-posted to:
- privacy@programming.dev
LLM-generated passwords (produced directly by the model, rather than by an agent calling a tool) can appear strong, but they are fundamentally insecure: LLMs are designed to predict likely tokens, which is the opposite of securely and uniformly sampling random characters.
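For contrast, here is a minimal sketch of what a proper password tool does (the kind an agent should call instead of generating characters itself): sample uniformly from a character set using a cryptographically secure RNG, via Python's `secrets` module.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Uniform sampling from a CSPRNG -- every character is equally
    # likely, unlike an LLM's token predictions, which are biased
    # toward sequences common in its training data.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Each character here carries roughly log2(94) ≈ 6.55 bits of entropy, so a 20-character password gives about 131 bits; an LLM's output has no such guarantee, however random it looks.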
Even where they aren't, I suspect this could happen when LLMs are used as open-ended agents that might try creating their own accounts. The article also mentions this: