Bummer for them, because it can’t be fixed. “Hallucinations” aren’t an anomaly for LLMs; they’re how LLMs work. The model makes everything up, all the time, by the same mechanism, and we only call it a hallucination when the output happens to be obviously wrong. Stopping one of these chatbots from producing made-up information would fundamentally disable it.
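To make that concrete, here’s a toy sketch (Python, not code from any real model) of the core generation loop. The `next_token_distribution` function and its tiny vocabulary are made-up stand-ins for a trained model; the point is that every token, accurate or not, comes out of the exact same sample-from-a-distribution step, and nothing in the loop ever asks whether the output is true.

```python
import random

# Hypothetical stand-in for a trained model: maps a context string to a
# probability distribution over possible next tokens. In a real LLM these
# probabilities come from the network's learned weights.
def next_token_distribution(context):
    vocab = ["Paris", "Lyon", "Mars", "is", "the", "capital"]
    weights = [random.random() for _ in vocab]  # pretend these are learned
    total = sum(weights)
    return {tok: w / total for tok, w in zip(vocab, weights)}

def generate(prompt, n_tokens=5):
    context = prompt
    for _ in range(n_tokens):
        dist = next_token_distribution(context)
        # Sample the next token. There is no "truth check" branch anywhere:
        # a correct answer and a hallucination are produced identically.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        context += " " + token
    return context

print(generate("The capital of France"))
```

There’s no separate “hallucination mode” to turn off; filtering out the false outputs would mean filtering the only process the model has.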
That said, they should absolutely be held accountable. They’re marketing this shit as if it were a thinking, speaking human mind. They shouldn’t get to wash their hands of it when it causes harm.