• rbos@lemmy.ca · 1 day ago

    I’ve been told that LLMs are hitting pretty hard diminishing returns, in that you need exponentially increasing amounts of compute for roughly linear gains in model performance. The cost of marginal improvements is bumping into practical limits (rough sketch of the math below).

    It can’t turn into general AI; that’s not how LLMs work. So they’re throwing money after nothing.
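
    Roughly, the scaling picture people point to looks like the toy Python sketch below. It assumes a Chinchilla-style power law with made-up constants, so the numbers are purely illustrative, not measured:

        # Toy illustration of diminishing returns under a power-law scaling curve.
        # The constants (L0, A, alpha) are illustrative assumptions, not fitted values.
        def loss(compute: float, L0: float = 1.7, A: float = 10.0, alpha: float = 0.05) -> float:
            """Hypothetical loss as a function of training compute (arbitrary units)."""
            return L0 + A * compute ** -alpha

        for c in [1e21, 1e22, 1e23, 1e24, 1e25]:
            print(f"compute {c:.0e} -> loss {loss(c):.3f}")
        # Each 10x jump in compute buys a smaller and smaller drop in loss:
        # roughly linear gains in the metric for exponentially growing compute.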

    • brucethemoose@lemmy.world · 9 hours ago

      Architectural improvements could help, but the big guys can’t even get past “basic” problems like high-temperature sampling (quick sketch of what temperature does below). Corporate development is way more conservative than you’d think.

      And it’s not getting better. Compare the “all star” ego Tech Bro teams at Meta, OpenAI and the like with the researchers who have quit.

      The Chinese labs are testing some more interesting optimizations (and actually publishing papers on them), but they’re still pretty conservative all things considered. They appear content with LLMs as modest coding assistants and document processors; you don’t hear anything about AGI in their presentations.
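
      For context, temperature sampling is just a rescaling of the logits before the softmax. A minimal Python sketch (plain NumPy, made-up logits, not anyone’s production code):

          import numpy as np

          def sample_with_temperature(logits: np.ndarray, temperature: float, rng=None) -> int:
              """Sample one token id from logits softened or sharpened by a temperature."""
              rng = rng or np.random.default_rng()
              scaled = logits / temperature          # T > 1 flattens, T < 1 sharpens
              probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
              probs /= probs.sum()
              return int(rng.choice(len(probs), p=probs))

          # Illustrative logits for a tiny 4-token vocabulary (made-up numbers).
          logits = np.array([2.0, 1.0, 0.5, -1.0])
          print(sample_with_temperature(logits, temperature=0.7))   # sharper, more deterministic
          print(sample_with_temperature(logits, temperature=1.5))   # flatter, more random

      The higher the temperature, the flatter the distribution, which is where a lot of the sampling headaches come from.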

    • bobalot@lemmy.world · 1 day ago

      It’s like saying that adding more rungs to a ladder will make it fly.

      Fundamentally, LLMs are not AI.