• frongt@lemmy.zip · 2 hours ago

      It’s a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the amount of context attached to each piece of information (no idea what it’s called).
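
      A minimal sketch of why that is, using a toy next-token distribution with invented probabilities: the point is that frequency in training text, not truth, is what the sampling step sees.

      ```python
      import random

      # Hypothetical next-token distribution after the prompt
      # "The capital of Australia is". Numbers are invented for
      # illustration; they reflect how often a phrase appears in
      # text, not whether it is factually correct.
      next_token_probs = {
          "Sydney": 0.55,    # plausible and common in text, wrong
          "Canberra": 0.35,  # correct, assumed to be written less often
          "Melbourne": 0.10,
      }

      def sample_next_token(probs):
          tokens = list(probs)
          weights = list(probs.values())
          return random.choices(tokens, weights=weights, k=1)[0]

      # Over many runs, roughly 55% of completions confidently assert
      # the wrong capital, with nothing in the output marking it as a guess.
      print("The capital of Australia is", sample_next_token(next_token_probs))
      ```

      Nothing in that loop checks the claim against the world; fluency is the only objective.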

      • Truscape@lemmy.blahaj.zone · 2 hours ago

        I believe it’s just complexity and token/compute usage.

        You end up chasing diminishing returns as well (100% or even 95% accuracy just isn’t achievable for certain areas of study, especially niche topics); see the sketch below.

        It’s also 100% unfixable at the level of the technology’s basic premise. I can enjoy an upscaling algorithm that makes my retro games look more detailed at the cost of an odd artifact, but I sure as shit am not taking that risk for information gathering and general study.
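
        A purely illustrative sketch of that diminishing-returns curve: scaling-law papers find error falling roughly as a power law in compute, but the exponent here is invented, and real constants vary by model and task.

        ```python
        # Hypothetical power law: error ~ compute^-alpha.
        # alpha = 0.1 is an invented exponent for demonstration.
        def error_rate(compute, alpha=0.1):
            return compute ** -alpha

        for budget in (1e3, 1e6, 1e9, 1e12):
            print(f"compute {budget:.0e} -> error {error_rate(budget):.3f}")

        # Each 1000x increase in compute only halves the error
        # (0.501 -> 0.251 -> 0.126 -> 0.063), and it never reaches zero.
        ```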

      • magnetosphere@fedia.io · 2 hours ago

        I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.

    • ulterno@programming.dev · 1 hour ago

      That doesn’t seem like a solvable problem.
      People make stuff up too. The difference is that with a person, the bluff tends to be revealed through non-verbal communication.

      • magnetosphere@fedia.io · 1 hour ago

        Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.

        If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.