- cross-posted to:
- privacy@programming.dev
LLM-generated passwords (generated directly by the LLM, rather than by an agent calling a proper tool) appear strong but are fundamentally insecure: LLMs are designed to predict likely tokens, which is the opposite of securely and uniformly sampling random characters.
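For contrast, here is what uniform sampling actually looks like. A minimal sketch in Python using the standard `secrets` module, which draws from the OS's cryptographically secure RNG (the alphabet and length here are just illustrative choices):

```python
import math
import secrets
import string

# Uniform sampling from a fixed alphabet via the OS CSPRNG.
# Every character is equally likely, so the entropy is exactly
# length * log2(len(alphabet)) bits, a guarantee an LLM's
# next-token prediction cannot make.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(pw)
print(f"~{len(pw) * math.log2(len(ALPHABET)):.1f} bits of entropy")
```

Note `secrets`, not `random`: the default `random` module uses a Mersenne Twister, whose output is predictable once enough of it is observed, so it is unsuitable for passwords.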
Getting away from more direct requests, I can absolutely imagine an AI offering passwords as suggestions during a coding session, including "temp" passwords that look secure enough that the user thinks "why bother changing it?".