LLM-generated passwords (produced directly by the LLM, rather than by an agent calling a tool) appear strong but are fundamentally insecure: LLMs are designed to predict likely tokens, the opposite of securely and uniformly sampling random characters.
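By contrast, a secure generator samples every character uniformly from a CSPRNG. A minimal sketch using Python's `secrets` module (the alphabet and length here are illustrative choices, not a recommendation from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Uniformly sample each character from a CSPRNG.

    This is the property LLM token prediction lacks: every
    character is equally likely, independent of the others.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```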

  • guy@piefed.social · 8 hours ago

    Welp, that’s another thing I hadn’t thought you could use LLMs for, and another thing I would never do

    • chicken@lemmy.dbzer0.com · 7 hours ago (edited)

      Even where they aren’t, I bet this is something that could end up happening when using them as open-ended agents that might try making their own accounts. The article also mentions this:

      Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.

    • Kogasa@programming.dev · 12 hours ago

      People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones.

      Despair

    • Steve@communick.news · 12 hours ago

      Very well. If you don’t want me to tell you the truth about people using LLMs to make passwords, I won’t.

    • KazuyaDarklight@lemmy.world · 12 hours ago

      Getting away from more direct requests, I can absolutely imagine an AI offering passwords/suggestions as part of a coding session, including “temp” passwords that look secure, so “why bother changing it?”

    • Ephera@lemmy.ml · 11 hours ago

      I imagine it’s a matter of asking it to generate some configuration where one of the fields is a password, so the LLM just auto-completes what a password most likely looks like.

    • Ephera@lemmy.ml · 10 hours ago (edited)

      The problem is that in this case, the LLM just naively auto-completes a password from what it knows a password most likely looks like.

      It is possible to give an LLM access to external tools, along with instructions, so that it’s likely to auto-complete a tool call instead. Then you could have it call a tool that generates a correct-horse-battery-staple passphrase, or a completely random password, e.g. by calling the pwgen command on Linux.

      But yeah, that just isn’t what this article is about. It’s specifically about cases where an LLM is used without tool calls and therefore naively auto-completes the most likely password-like string.
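      Such a passphrase tool could look like the following sketch. The tiny wordlist is purely illustrative; a real tool would draw from a large dictionary (e.g. a diceware list), and the point is only that each word comes from a CSPRNG, not from the LLM’s token predictions:

      ```python
      import secrets

      # Hypothetical tiny wordlist for illustration only; a real tool
      # would load a large dictionary so each word adds real entropy.
      WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
               "pebble", "canyon", "velvet", "quartz", "meadow", "tundra"]

      def generate_passphrase(n_words: int = 4) -> str:
          # Each word is drawn uniformly from a CSPRNG, independent of
          # the others, rather than being predicted from context.
          return "-".join(secrets.choice(WORDS) for _ in range(n_words))

      print(generate_passphrase())
      ```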

      • Melobol@lemmy.ml · 10 hours ago

        I’m kinda interested how many accounts you could log in to with those strings :D

  • Phoenixz@lemmy.ca · 12 hours ago

    LLM-generated passwords

    This is akin to asking Karen from accounting to generate a password for you, and trusting that it will be a true random and secure password and that she won’t yap about it to everyone.

    That statement is one of the most painfully dumb things I’ve read in my life, and I’ve read the Bible.

  • mr_anny@sopuli.xyz · 12 hours ago

    Not this horrible web design/engine again — whatever makes it lag and stutter like this almost induces an epileptic seizure in me.