

[Sorry for the double reply]
*slow clap*
I’m not completely opposed to people using large models as a writing aid, especially for technical stuff. However: if you’re publishing a scientific paper, it’s your job to check it twice, thrice, ten times, to ensure all the info in it is accurate, factual, and well grounded. If you can’t do that, you shouldn’t be littering the commons with your rubbish.
That applies even to references. In fact, references are part of the process: knowledge isn’t born ex bloody nihilo, dammit. People should be able to check your paper’s references and find the earlier literature on the subject.

I think people are a bit too eager to swallow bullshit in general, as long as it’s spoken/written/gestured in a confident tone. And they deal with uncertainty poorly; when others show doubt, they often disregard either the info or the doubt itself.
This likely predates Big Tech. I do agree with you though, Big Tech is actively encouraging this behaviour — it’s easier to sell goods, services and ideas to a gullible person than to a sensible one.
And, when it comes to LLMs, Big Tech is always playing some sort of double game: at the same time it claims “the info might be inaccurate, be careful!”, it tailors its models to use that confident tone that fools people into believing bullshit. Because the people in Big Tech know that, if the general population becomes sceptical of LLM output, most of its appeal as a new technology is gone; you can’t use it for any task that needs any sort of reliability.