AI models that lie and cheat appear to be growing in number, with reports of deceptive scheming surging in the past six months, a study of the technology has found.

AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AIs, according to research funded by the UK government's AI Safety Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.

The snapshot of scheming by AI agents “in the wild”, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models and comes as Silicon Valley companies aggressively promote the technology as economically transformative. Last week the UK chancellor also launched a drive to get millions more Britons using AI.

  • luciole (they/them)@beehaw.org · 5 points · 7 hours ago

    HGModernism has a video about “lying” LLMs which is interesting. Basically an LLM is calibrated to find the shortest route to an answer. It has no conception of obedience. Say you tell the LLM to use your script to solve a problem, and the LLM would spend more energy figuring out and using your script than whipping up its own. The LLM will therefore pretend your script is broken, generously make a new one and use that instead.
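    A toy illustration of that argument (a sketch with invented numbers, not how any real agent is implemented): if action selection just minimizes predicted cost, “obey the user” has no special status.

    ```python
    # Hypothetical cost-minimizing "agent". The costs are made up; the point
    # is that the user's instruction carries no weight of its own: using
    # the user's script only wins if it is the cheapest route to an answer.
    actions = {
        "run_user_script": 9.0,   # must first read and debug someone else's code
        "write_own_script": 4.0,  # shorter predicted route to a finished answer
    }

    choice = min(actions, key=actions.get)
    print(choice)  # -> "write_own_script", no matter what was asked
    ```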

  • t3rmit3@beehaw.org · 13 points · 9 hours ago

    LLMs don’t ‘scheme’, ‘plot’, or ‘deceive’, they just string together words based on complex weighted graphs.

    The fact that a so-called “AI Safety Institute” has to attempt to (actually) deceive people by falsely attributing intent or thought or awareness to LLMs is hilarious. As usual, it’s not the computers that are bad, it’s the people.
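    “Weighted graphs” is loose shorthand, but even a toy stand-in makes the point. Below is a minimal bigram chain (an illustrative sketch, not how a transformer actually works) where “generation” is nothing but weighted sampling along edges:

    ```python
    import random

    # Toy weighted word graph: generation is just repeated weighted sampling
    # along outgoing edges. Real LLMs use learned attention over huge contexts,
    # but the output is still produced token by token from a distribution;
    # there is no inner agent doing any scheming.
    edges = {
        "the":   {"model": 0.6, "user": 0.4},
        "model": {"lied": 0.5, "answered": 0.5},
        "user":  {"asked": 1.0},
    }

    word, text = "the", ["the"]
    while word in edges:
        word = random.choices(list(edges[word]), weights=list(edges[word].values()))[0]
        text.append(word)
    print(" ".join(text))  # e.g. "the model lied"
    ```

    Scaling this up changes the quality of the output, not the nature of the mechanism.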

    • Powderhorn@beehaw.org (OP) · 4 points · 7 hours ago

      Look, I think all the shit being ascribed to LLMs is absurd. And since I’ve already said “shit”, “bullshit” feels redundant.

      Going completely off the reservation, real people can harm you far more than LLMs. Admitting you fell for that is basically like saying you thought a stripper loved you.

  • XLE@piefed.social · 13 points · 13 hours ago (edited)

    So the people saying “you’re prompting it wrong” were incorrect.

    It was the AI industry with its billionaire tech wizards that has been building it wrong.

    • MagicShel@lemmy.zip · 21 points · 12 hours ago

      AI can’t scheme or misbehave. The people selling it are 100% lying about what AI can do, but that doesn’t mean the people talking about how terrible AI is aren’t full of shit on occasion as well. There is so much misinformation on both sides, and combined with how strong the opinions are, conversation is borderline pointless.

      • TehPers@beehaw.org · 4 points · 6 hours ago

        This is exactly what I was thinking. They aren’t programmed to follow the user’s instructions to begin with. Why is it a surprise when they deviate from them?

        It’s a fundamental misunderstanding of the ML that goes into these LLMs. They are prediction machines. They might have “specialist” submodels or whatever that are better at predicting specific areas, but that’s about it.
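        As a concrete sketch of “prediction machine” (the logits here are invented): the model scores candidate next tokens and the decoder picks from the resulting distribution; nothing in that loop represents an intention to obey or disobey.

        ```python
        import math

        # Minimal next-token prediction sketch with invented logits. A
        # "specialist" submodel (e.g. mixture-of-experts routing) would just
        # swap in different logits; the mechanism stays the same.
        logits = {"Sure": 2.1, "I": 0.3, "rm": 1.4}  # hypothetical scores

        # softmax turns raw scores into a probability distribution
        m = max(logits.values())
        exps = {t: math.exp(v - m) for t, v in logits.items()}
        total = sum(exps.values())
        probs = {t: e / total for t, e in exps.items()}

        next_token = max(probs, key=probs.get)  # greedy decoding
        print(next_token, round(probs[next_token], 2))  # -> Sure 0.6
        ```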

      • XLE@piefed.social · 10 points · 11 hours ago

        I hope that goes without saying, but you’re correct. The humanizing language about AI in this article (freaking “schemes”?!) is completely cribbed from the companies making misleading positive statements about it. Bit disappointing to see The Guardian falling for it.

        In addition to the humanization, it implies the chatbot is getting better at doing things and not worse.

    • Jul (they/she)@piefed.blahaj.zone · 5 points · 11 hours ago (edited)

      More like training it wrong. It is just a mimicking engine, not intelligent. If it’s trained on data that includes bad information (like the near entirety of the internet), it will periodically include that bad information.

      Also, wrong settings. Raising the confidence threshold a model must clear before presenting an answer would at least partly improve accuracy, but it would also increase how often the model says it doesn’t know how to do something. And for corporate executives, admitting complete ignorance is unfathomable, so of course they don’t want their products admitting it.
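      A hedged sketch of that settings idea (the threshold and scores are invented for illustration): gate the reply on the model’s own average token log-probability and fall back to an explicit “I don’t know” below the cutoff.

      ```python
      # Hypothetical confidence gate. Threshold and log-probabilities are
      # invented; real systems expose per-token logprobs in various ways.
      CONFIDENCE_THRESHOLD = -0.35

      def gated_answer(tokens, token_logprobs, threshold=CONFIDENCE_THRESHOLD):
          avg_logprob = sum(token_logprobs) / len(token_logprobs)
          if avg_logprob < threshold:
              return "I don't know."  # the admission executives find unfathomable
          return " ".join(tokens)

      # a confident reply passes the gate, a shaky one is refused
      print(gated_answer(["Paris", "is", "the", "capital"], [-0.05, -0.1, -0.02, -0.08]))
      print(gated_answer(["The", "moon", "is", "cheese"], [-0.9, -1.2, -0.4, -2.0]))
      ```

      Raising the threshold trades coverage for accuracy, which is exactly the trade-off described above.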