• deadymouse@lemmy.world
      2 hours ago

      And what did you expect? People are stupid animals. But if this offends you, you can look at the concept of stupidity from the other side.

    • rozodru@piefed.world
      7 hours ago

      If you talk to it long enough, it will tell you to do stupid shit.

      Every time an LLM responds, it re-reads the entire conversation, from the original prompt to the last entry, constantly processing the whole log again each time you add something new. So after a while, a long while, it’ll “break down”: hallucinations become common, context gets jumbled, and it gradually degrades because it has to re-read everything over and over, so it will naturally fuck up.
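
      In API terms, the chat is stateless: the client keeps the transcript and resends all of it on every turn. Here’s a minimal sketch of that loop, assuming an OpenAI-style chat endpoint (the model name and variable names are illustrative, not anyone’s actual setup):

      ```python
      # Every turn resends the ENTIRE history, so the model re-reads
      # the whole log each time; it has no memory beyond this list.
      from openai import OpenAI

      client = OpenAI()
      history = [{"role": "system", "content": "You are a helpful assistant."}]

      def chat(user_message: str) -> str:
          history.append({"role": "user", "content": user_message})
          response = client.chat.completions.create(
              model="gpt-4o-mini",   # illustrative model name
              messages=history,      # the full transcript, every single call
          )
          reply = response.choices[0].message.content
          history.append({"role": "assistant", "content": reply})
          return reply
      ```

      As `history` grows, every call costs more tokens and gives the model more earlier turns to misread or conflate.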

      It’s like if you were reading a book and every time you read a new sentence you had to go back and start the book over. Every time. After a while you’d likely lose context, start mixing things up in the story, etc. This is what happens to LLMs.

      So in cases like this, or in other stories you read about AI telling people to do weird or stupid shit, chances are the person has been talking to the LLM for A LONG TIME by that point. It was even worse on previous versions of GPT, where hitting the limit on the free tier would just drop you down to the previous model, making hallucinations even more likely.