Wikipedia had to deal with the same sort of thing: the biographies of living persons policy grew out of the 2005 incident where vandalism of John Seigenthaler’s article inserted defamatory material. With many people coming to rely on AI summaries, at least for first impressions, AI companies need to lose some lawsuits until they fix their software.
Bummer for them, since it can’t be fixed. “Hallucination” isn’t an anomaly for LLMs; it’s how they function. The algorithm makes everything up, all the time, and we only call it a hallucination when the output is obviously wrong. Stopping one of these chatbots from giving made-up information would fundamentally disable it.
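For anyone who hasn’t seen it spelled out, here’s a minimal sketch of the core generation step, with a toy vocabulary and invented logits (every number here is made up for illustration, not taken from any real model): the model samples the next token from a probability distribution over what’s plausible, and no step anywhere checks whether the result is true.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from the model's probability distribution.
    'Plausible' and 'accurate' are scored identically; there is no
    truth check anywhere in this loop."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # stabilized softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Toy vocabulary and hypothetical scores for some prompt like
# "The capital of X is". The model just ranks plausibility.
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]
logits = np.array([2.0, 1.5, 0.5, 0.3])

# Run it a few times: the same prompt can yield different
# completions run to run, because generation is sampling.
for _ in range(5):
    print(vocab[sample_next_token(logits)])
```

That last loop is also the non-determinism point in a nutshell: identical input, different outputs, by design.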
That said, they should absolutely be held accountable. They’re pushing this shit as if it’s a thinking, speaking human mind. They shouldn’t get to wash their hands of it when it causes harm.
I wonder how plaintiffs can prove this sort of thing in court, given that AI outputs are non-deterministic. Surely a screenshot alone won’t be enough.