I’m not completely opposed to people using large models as a writing aid, especially for technical stuff. However: if you’re publishing a scientific paper, it’s your job to check it twice, thrice, ten times, to ensure all info there is accurate, factual, and well grounded. If you can’t do it, you shouldn’t be littering the commons with your rubbish.
That applies even to references. In fact, references are part of the process: knowledge isn’t born ex bloody nihilo dammit, people should be able to check the references of your paper and find earlier literature about the subject.
It’s absolutely mad that LLM hallucinations are “socially accepted,” and that the population seems to be kept ignorant of them. It’d be like not requiring a licence to drive cars, completely obscuring anything about how they work, locking the steering behind a corporate subscription, and then calling running people over a “cost of advancement; the next car will be better, we promise!”
…I know why.
Education would reduce engagement. Big Tech can’t have that.
But still. It’s mad. These text models should be presented as primitive aids, like they were designed to be. Not freaking do-anything magic lamps.
I think people are a bit too eager to swallow bullshit in general, as long as it’s spoken/written/gestured in a confident tone. And they often deal with uncertainty poorly; when others show doubt, they often disregard either the info or the doubt itself.
This likely predates Big Tech. I do agree with you though, Big Tech is actively encouraging this behaviour — it’s easier to sell goods, services and ideas to a gullible person than to a sensible one.
And, when it comes to LLMs, Big Tech is always playing some sort of double game: it claims “the info might be inaccurate, be careful!” while tuning its models to use that confident tone that fools people into believing bullshit. Because the people in Big Tech know that, if the general population becomes sceptical towards LLM output, most of its appeal as a new technology is gone; you can’t use it for any task that needs any sort of reliability.
[Sorry for the double reply]
*slow clap*