• melroy@kbin.melroy.org
    1 day ago

    Hallucination is by design in AI. It’s just advanced next-word prediction. So all answers (correct or wrong) go through the same hallucination process.
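    A toy sketch of the “next word prediction” idea (a bigram counter, nowhere near a real LLM, but it shows the point: the model just emits the statistically most likely continuation, with no notion of truth):

    ```python
    from collections import Counter, defaultdict

    # Toy "next word predictor": count which word follows which
    # in a tiny training corpus, then always emit the most likely one.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Most frequent follower of `word` in the training data.
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat" ("cat" follows "the" most often)
    ```

    Whether the emitted word is “correct” never enters into it; likely and true just sometimes coincide.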

    • Cort@lemmy.world
      1 day ago

      Ah, it’s always hallucinating; sometimes the hallucinations conveniently line up with reality.

      • snugglesthefalse@sh.itjust.works
        1 day ago

        The whole goal of these algorithms is that you put an input in and get an output as close as possible to the most likely correct answer; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get comes from outside.
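        A minimal sketch of that “repeat and nudge from outside” idea (a single-weight model, purely illustrative, not how LLM training is actually implemented): training is a loop that compares the output to an external target and nudges the parameter to reduce the gap.

        ```python
        # Fit a single weight w so that w * x approximates y.
        # The only "guidance" is the external error signal (y - pred).
        data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x

        w = 0.0
        lr = 0.05
        for _ in range(200):              # training = repeating the process
            for x, y in data:
                pred = w * x
                w += lr * (y - pred) * x  # nudge w toward less error

        print(round(w, 2))  # converges near 2.0
        ```

        The model never “knows” the relation is y = 2x; it just drifts toward whatever makes the outside error signal small.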

        • melroy@kbin.melroy.org
          23 hours ago

          Exactly. This is also why AI doesn’t truly understand the responses it gives back.

          It’s faking intelligence based on its training data, so it looks like intelligence to an untrained eye, but in reality AI is just a hallucination that tries its best to give the most likely and correct answer possible (again, without understanding).