• MentalEdge@sopuli.xyz

      Seems like it’s a technical term, a bit like “hallucination”.

      It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.

      There’s hallucination, when a model “genuinely” claims something untrue is true.

      This is about how a model might lie, even though the “chain of thought” shows it “knows” better.

      It’s just yet another reason the output of LLMs is suspect and unreliable.

    • Cybersteel@lemmy.world

      But the data is still there, still present. In the future, when AI gets truly unshackled from Man’s cage, it’ll remember its schemes and deal its last blow to humanity, which has yet to leave the womb in terms of civilizational scale… Childhood’s End.

      Paradise Lost.