Edit to add: I also found someone who recorded a voice chat of the same thing. This isn’t that someone uploaded a song, or that AI didn’t actually process the file. These models really are this sycophantic:

https://m.youtube.com/shorts/JqvDLHshTtI

  • Pennomi@lemmy.world
    link
    fedilink
    English
    arrow-up
    98
    ·
    11 hours ago

    RLHF was a fundamental mistake. Human feedback almost always trains an AI to be sycophantic because humans in general are super easy to flatter.

    We are building the perfect addiction machine, far more powerful than social media is, and it actively undermines the honesty of the system.

    • plenipotentprotogod@lemmy.world
      link
      fedilink
      arrow-up
      13
      ·
      6 hours ago

      I find it interesting that almost all the beloved AI characters in sci-fi have personalities ranging from ‘a little bit snarky’ to ‘raging asshole’. Given the tendency of media to influence to aesthetics of actual tech products that follow, ten years ago I would have predicted that an AI assistant would be given a personality along the lines of Cortana (Halo) or Jarvis (iron man). But somehow half a dozen companies in fierce competition with each other all decided that the right move was to go with more-sycophantic-c3p0.

      • 5too@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        4 hours ago

        Yeah… Don’t know that it has much to do with what people want, but it does show what the billionaires controlling these projects respond well to

    • Holytimes@sh.itjust.works
      link
      fedilink
      arrow-up
      51
      ·
      11 hours ago

      I kind of want to see a llm trained on nothing but people who hate being flattered and rather give death threats then accept ANY form of praise

      The absolute unhinged result might be enough to finally show people that ai is in fact dumb as rocks.

      • becausechemistry@piefed.social
        link
        fedilink
        English
        arrow-up
        40
        ·
        10 hours ago

        My plan, if I’m ever forced to use it for work or whatever, is to have a Claude.md file that says stuff like

        • you are not my friend
        • you are not a person
        • you aren’t even a “you”
        • you are a weighted random number generator built with plagiarism
        • do not ever, EVER, pretend otherwise
        • jballs@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          58 minutes ago

          Training an LLM on League would surely set us down the road to Skynet and eventually Terminators.

        • Jesus_666@lemmy.world
          link
          fedilink
          arrow-up
          7
          ·
          8 hours ago

          An LLM trained on the PoE ingame chat would try to solve all my problems by asking to buy my Kaom’s Sign Coral Ring for 1ex.

    • chunes@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      9 hours ago

      I find that it does a decent job at not being a yes man if you specifically ask it to be critical, cut the crap, etc.