• nymnympseudonym@piefed.social
    20 hours ago

    stochastic parrot

    A phrase that throws more heat than light.

What they are predicting is not the next word; they are predicting the next idea.

    • porcoesphino@mander.xyz
      7 hours ago

In terms of how it functionally works, it's the next word / token / chunk a lot more than it's an "idea". An "idea" is hard to even define.

The other relatively accurate analogy is a probabilistic database.

Neither works if you've fallen into anthropomorphising, but both are relatively faithful to the architecture and to how these systems are tested, at least for people who aren't very computer literate, and far more so than the anthropomorphising alternatives.
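The "probabilistic database" analogy can be made concrete in a few lines: a toy bigram table that maps each word to a probability distribution over the word that follows it. The corpus and words here are made up purely for illustration; a real LLM conditions on long contexts with a neural net rather than a lookup table, but the lookup framing is the analogy.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Normalise the counts into probabilities: a "probabilistic database"
# you can query with a word to get a distribution over next words.
table = {
    prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
    for prev, nxts in counts.items()
}

print(table["the"])  # → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```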

    • kazerniel@lemmy.world
      9 hours ago

      throws more heat than light

Thanks, I hadn't heard this phrase before, but it feels quite descriptive :)

    • ageedizzle@piefed.ca
      20 hours ago

Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that's just a means to an end (the end being the next token).
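The "token at a time" point can be sketched as an autoregressive loop: whatever abstractions the model builds internally, its output interface is a single next token per step. `fake_scores` below is a hypothetical stand-in for a real model's forward pass, just so the loop runs.

```python
import random

vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_scores(context):
    # Hypothetical stand-in for a neural net's forward pass:
    # deterministic pseudo-scores, one per vocabulary entry.
    rng = random.Random(len(context))
    return [rng.random() for _ in vocab]

context = ["the"]
for _ in range(4):
    scores = fake_scores(context)
    # Greedy decoding: emit the single highest-scoring token,
    # append it, and predict again from the extended context.
    next_token = vocab[max(range(len(vocab)), key=scores.__getitem__)]
    context.append(next_token)

print(context)  # five tokens, produced strictly one at a time
```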

      • affenlehrer@feddit.org
        16 hours ago

Also, the LLM is only predicting the next token, not selecting it. And it's not limited to the role of assistant: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls etc.).
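The predicting-vs-selecting split can be sketched like this: the "model" only yields a probability distribution over tokens, and choosing one is a separate step that the inference engine controls (greedy, temperature sampling, top-k, and so on). The logits here are made up; a real engine would get them from a forward pass.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, -1.0]    # made-up model output

probs = softmax(logits)           # prediction: the model's job ends here
greedy = probs.index(max(probs))  # selection strategy 1: deterministic
rng = random.Random(0)
sampled = rng.choices(range(len(probs)), weights=probs)[0]  # strategy 2: stochastic

print(greedy, sampled)
```

Swapping the selection strategy (or the prompt template that assigns roles) changes the output without touching the model at all, which is the commenter's point.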