• frongt@lemmy.zip · 4 hours ago

    It’s a feature of text prediction, not a bug. They could mitigate it, but that would mean drastically increasing the amount of context attached to each piece of information (no idea what the term for that is).

    • Truscape@lemmy.blahaj.zone · edited · 3 hours ago

      I believe it’s just complexity and token/compute usage.

      You end up chasing diminishing returns as well (100% or even 95% accuracy just isn’t possible for certain areas of study, especially niche topics).

      It’s also 100% unfixable given the premise of the technology. I can enjoy an upscaling algorithm that makes my retro games look more detailed at the cost of the odd artifact, but I sure as shit am not taking that risk for information gathering and general study.

    • magnetosphere@fedia.io · 3 hours ago

      I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.