• flandish@lemmy.world · 45 points · 1 day ago

    “Hallucinations” are things humans do. An AI can only be wrong. Even when it makes up data, it’s just a stochastic parrot.

    • PushButton@lemmy.world · 39 points · 1 day ago

      They coined the term “hallucination” as soon as people realized that the “AI thing” was throwing bullshit back at us.

      They had to force that term into people’s heads; otherwise we would call it bullshit, lies, and so on, as we should.

      It’s like Google with their “sideloading”. There is no such thing; it’s just installing an app…

      It’s a word war. People are being manipulated.

      • architect@thelemmy.club · 1 point · 1 hour ago

        Lies require intent.

        So the AI hallucinates because it loses context. Hook it up to quantum computers and you won’t have that happening. So regular people think the thing is stupid, while the government has a murder AI.

        • LegenDarius@lemmy.world · 1 point · 12 hours ago

          Why do you concur? You have a problem with “hallucinations” because it’s something humans do. This commenter wants to call them (among other things) “lies”, which implies intent and knowledge of falsehood, which an LLM definitely can’t have. I’m not saying “hallucinations” is super accurate, but I don’t think the term is so positive that it lessens the major issues LLMs have.

          • flandish@lemmy.world · 2 points · 9 hours ago (edited)

            Ok, so I think what you see as the commenter wanting to call them lies is really a description of what the corporations are pushing (branded as “hallucinations”, but what a reasonable person would call lies).

            In other words, it’s a “meta” conversation that I concur with. An LLM obviously cannot do human things, but “sales” can portray it as if it could.

            In my day-to-day usage I make an actual effort to refer to the wrong stuff an LLM produces as simply wrong, not with human-focused words.

    • melroy@kbin.melroy.org · 10 points · 1 day ago

      Hallucinations are by design for AI. It’s just advanced next-word prediction, so all answers (correct or wrong) go through the same hallucination process.
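      To make “next-word prediction” concrete, here’s a rough toy sketch in Python. It’s a bigram model with made-up counts (real LLMs use neural networks, but the sampling step is the same idea): the model just samples the next word from a probability distribution, and there is no separate path for true vs. false output.

      ```python
      import random

      # Made-up counts of which word follows which (a stand-in for a
      # trained model's learned statistics).
      bigram_counts = {
          "the": {"sky": 4},
          "sky": {"is": 4},
          "is": {"blue": 3, "green": 1},  # "green" is wrong, yet still a candidate
      }

      def next_word(word: str) -> str:
          """Sample the next word in proportion to how often it followed `word`."""
          candidates = bigram_counts[word]
          return random.choices(list(candidates), weights=list(candidates.values()))[0]

      # "the sky is blue" and "the sky is green" come out of the exact same
      # process; one is just more probable than the other.
      tokens = ["the"]
      while tokens[-1] in bigram_counts:
          tokens.append(next_word(tokens[-1]))
      print(" ".join(tokens))
      ```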

      • Cort@lemmy.world · 12 points · 1 day ago

        Ah, so it’s always hallucinating; sometimes the hallucinations just conveniently line up with reality.

        • snugglesthefalse@sh.itjust.works · 4 points · 1 day ago

          The whole goal of these algorithms is that you put an input in and the output comes out as close to the most likely correct answer as it can get; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get comes from outside.
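          As a rough illustration of that “repeating the process” idea, here’s a toy Python sketch (made-up corpus, bigram counts instead of a neural network): training just makes whatever actually followed each word more likely next time. The objective is likelihood, not correctness, so a corpus full of wrong sentences would be learned just as readily.

          ```python
          from collections import defaultdict

          # Made-up training corpus for illustration.
          corpus = ["the sky is blue", "the sky is blue", "the sky is green"]

          counts = defaultdict(lambda: defaultdict(int))
          for sentence in corpus:
              words = sentence.split()
              for prev, nxt in zip(words, words[1:]):
                  counts[prev][nxt] += 1  # each repetition nudges the probabilities

          # After "training": P(blue | is) = 2/3, P(green | is) = 1/3.
          total = sum(counts["is"].values())
          for word, c in counts["is"].items():
              print(f"P({word} | is) = {c}/{total}")
          ```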

          • melroy@kbin.melroy.org · 1 point · 23 hours ago

            Exactly. This is also why AI doesn’t truly understand the responses it gives back.

            It’s faking intelligence based on its training data, so it looks like intelligence to an untrained eye, but in reality the AI is just a hallucination machine that tries its best to give the most likely and correct answer possible (again, without understanding).