LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters.
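For contrast with token prediction, here is a minimal sketch of what secure uniform sampling looks like, using Python's `secrets` module (which draws from the OS CSPRNG); the function name and length default are illustrative, not from the article:

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Sample each character uniformly at random from a fixed alphabet,
    using the operating system's cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

Every character is chosen independently and uniformly, so the entropy is exactly `length * log2(len(alphabet))` bits, regardless of what "typical" passwords look like.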

  • Ephera@lemmy.ml · 10 hours ago

    The problem is that in this case, the LLM just naively auto-completes a password based on what it knows passwords most likely look like.

    It is possible to give an LLM access to external tools, along with instructions, so that it’s likely to auto-complete a tool call instead. Then you could have it call a tool that generates a correct-horse-battery-staple passphrase, or a completely random password, e.g. by calling the pwgen command on Linux.
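The correct-horse-battery-staple idea above can be sketched as a tool an agent could call instead of generating the password itself; the tiny wordlist here is a placeholder (a real tool would load something like the EFF long wordlist), and the function name is made up for illustration:

```python
import secrets

# Placeholder wordlist for illustration only; a real tool would use a
# large curated list (e.g. the EFF long wordlist, ~7776 words).
WORDLIST = ["correct", "horse", "battery", "staple",
            "orbit", "lantern", "pebble", "waffle"]

def generate_passphrase(num_words: int = 4) -> str:
    """Tool function an agent could invoke: picks words uniformly
    at random with a CSPRNG, instead of predicting likely tokens."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(num_words))

print(generate_passphrase())
```

The security then rests entirely on the CSPRNG and the wordlist size, not on the model's token statistics, which is the whole point of routing generation through a tool.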

    But yeah, that just isn’t what this article is about. It’s specifically about cases where an LLM is used without tool calls and therefore naively auto-completes the most likely password-like string.

    • Melobol@lemmy.ml · 10 hours ago

      I’m kinda interested how many accounts you could log into with those strings :D