• Hirom@beehaw.org · 8 hours ago

    According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.

    Producing inaccurate technical advice, with a confident tone, at scale.

    If that LLM were an employee, it would get a formal reprimand, then be demoted or fired if it kept this up.

    • Tim@lemmy.snowgoons.ro · 4 hours ago

      That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.

      • Hirom@beehaw.org · 2 hours ago

        That’s a good way to describe LLMs: very bad and very prolific consultants.

  • GregorGizeh@lemmy.zip · 15 hours ago

    “Rogue AI,” as if it’s some sentient evil thing, when it’s just an LLM with too many permissions… This timeline is so dystopian, but simultaneously incredibly lame. I hate it.

    • James R Kirk@startrek.website · 4 hours ago

      It’s also a pretty big exaggeration of what actually happened, which is that it generated and posted some technically inaccurate information.

    • Hirom@beehaw.org · 8 hours ago

      It shows LLMs can do significant harm without having the capabilities of an AGI.

      Overhyping LLMs and overinflating their capabilities make things worse, as people become less skeptical of LLM output.