• pingveno@lemmy.world · 11 hours ago

    Wikipedia had to deal with this same sort of thing with its biographies of living persons policy, adopted after 2005 vandalism of John Seigenthaler’s article inserted defamatory material. With many people coming to rely on AI summaries, at least for first impressions, AI companies need to lose some lawsuits until they fix their software.

    • Ech@lemmy.ca · 10 hours ago

      Bummer for them, since it can’t be fixed. “Hallucinations” aren’t an anomaly for LLMs; they’re how LLMs function. The algorithm makes everything up, all the time. It only gets called out when the fabrication is obvious. Stopping one of these chatbots from giving made-up information would fundamentally disable it.

      That said, they should absolutely be held accountable. They’re pushing this shit as if it’s a thinking, speaking human mind. They shouldn’t get to wash their hands of it when it causes harm.

    • inari@piefed.zip · 10 hours ago

      I wonder how plaintiffs can prove this sort of thing in court given that AI outputs are non-deterministic. Surely a screenshot won’t be enough, either.
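A minimal sketch of where that non-determinism comes from. LLMs typically pick each output token by sampling from a probability distribution rather than always taking the most likely choice, so the same prompt can produce different completions on different runs. The distribution and token names below are made up for illustration; no real model's API is assumed.

```python
import math
import random

def sample_next_token(logprobs, temperature=1.0, seed=None):
    """Sample one token from a toy next-token distribution.

    `logprobs` is a hypothetical stand-in for a model's output layer:
    a dict mapping candidate tokens to log-probabilities.
    """
    rng = random.Random(seed)
    # Scale log-probabilities by temperature, then softmax-normalize.
    scaled = {tok: lp / temperature for tok, lp in logprobs.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(lp - peak) for tok, lp in scaled.items()}
    total = sum(weights.values())
    # Draw one token in proportion to its probability mass.
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding at the boundary

# Same "prompt" (same distribution), but different random draws can
# yield different completions -- which is why a single saved transcript
# may not be reproducible on demand.
dist = {"guilty": -0.4, "innocent": -1.5, "acquitted": -2.0}
samples = {sample_next_token(dist, seed=s) for s in range(200)}
```

Across 200 differently-seeded runs, `samples` will almost certainly contain more than one distinct token, even though the input never changed. Providers can log the seed and sampling parameters to make a given output replayable, but a plaintiff usually has no access to those internals, which is presumably why a bare screenshot is weak evidence.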