[Sorry for the double reply]
*slow clap*
I’m not completely opposed to people using large models as a writing aid, especially for technical stuff. However: if you’re publishing a scientific paper, it’s your job to check it twice, thrice, ten times, to ensure all info there is accurate, factual, and well grounded. If you can’t do it, you shouldn’t be littering the commons with your rubbish. That applies even to references. In fact, references are part of the process: knowledge isn’t born ex bloody nihilo dammit, people should be able to check the references of your paper and find earlier literature about the subject.
It’s absolutely mad that LLM hallucinations are “socially accepted,” and that the population seems to be kept ignorant of them. It’d be like not requiring a license to drive cars, completely obscuring how they work, locking steering behind a corporate subscription, and then calling running people over a “cost of advancement; the next car will be better, we promise!”
…I know why.
Education would reduce engagement. Big Tech can’t have that.
But still. It’s mad. These text models should be presented as primitive aids, like they were designed to be. Not freaking do-anything magic lamps.
I think people are a bit too eager to swallow bullshit in general, as long as it’s spoken/written/gestured in a confident tone. And they often deal with uncertainty poorly; when others show doubt, they often disregard either the info or the doubt itself.
This likely predates Big Tech. I do agree with you though, Big Tech is actively encouraging this behaviour — it’s easier to sell goods, services and ideas to a gullible person than to a sensible one.
And, when it comes to LLMs, Big Tech is always playing some sort of double game: it claims “the info might be inaccurate, be careful!”, yet it tunes its models to use that confident tone that fools people into believing bullshit. Because the people in Big Tech know that, if the general population becomes sceptical towards LLM output, most of its appeal as a new technology is gone; you can’t use it for any task that needs any sort of reliability.
[Justin Angel] My guess is that this policy will be applied selectively depending on institutional privilege and personal notoriety. It’ll end up as a tool of silencing unconnected individuals vs. promoting better scientific discourse. // I aspire to be wrong.
Doesn’t it sound weird that someone immediately assumes (i.e. makes up) that the policy will be implemented unfairly, as soon as it is announced? Well, if you check their profile, it doesn’t:
Justin Angel // @JustinAngel // AI, iOS & Android dev. Worked at Meta, Uber, Amazon, Apple, and Microsoft building apps, developer platforms, and hardware. Tweeting about LLM psychotherapy.
Plus almost all of their tweets are about LLMs. This stinks of “competing interests” from a distance.
Good. If you can’t be bothered to read your own papers, you shouldn’t be in academia.
Weak-ass penalty
It’s actually harsher than it looks: “followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.”
This means that, for all intents and purposes, if you get caught by this policy, you’re permanently banned from submitting preprints to arXiv, even though preprints are the main appeal of the repository.