It’s absolutely mad that LLM hallucinations are “socially accepted,” and that the population seems to be kept ignorant of them. It’d be like not requiring a license to drive cars, and completely obscuring anything about how they work, locking steering behind a corporate subscription connection, and then calling running people over a “cost of advancement; the next car will be better, we promise!”
…I know why.
Education would reduce engagement. Big Tech can’t have that.
But still. It’s mad. These text models should be presented as primitive aids, like they were designed to be. Not freaking do-anything magic lamps.
I think people are a bit too eager to swallow bullshit in general, as long as it's spoken, written, or gestured in a confident tone. And they often deal with uncertainty poorly; when others show doubt, they tend to disregard either the info or the doubt itself.
This likely predates Big Tech. I do agree with you, though: Big Tech is actively encouraging this behaviour, because it's easier to sell goods, services, and ideas to a gullible person than to a sensible one.
And, when it comes to LLMs, Big Tech is always playing some sort of double game: it claims "the info might be inaccurate, be careful!" while tuning its models to use the very confident tone that fools people into believing bullshit. The people in Big Tech know that if the general population becomes sceptical of LLM output, most of its appeal as a new technology is gone; you can't use it for any task that needs any sort of reliability.