- cross-posted to:
- technology@lemmy.world
The publisher of the Dutch newspaper De Telegraaf and the Irish Independent has suspended one of its senior journalists after he admitted using AI to “wrongly put words into people’s mouths”.
That is an interesting portfolio.
Peter Vandermeersch, the former head of the Irish operations at Mediahuis, said he “fell into the trap of hallucinations” – the term for AI-generated errors – when using the technology.
Vandermeersch, a fellow of “journalism and society” at the European publishing group, has been suspended from his role.
The experienced journalist said he had summarised reports using AI tools such as ChatGPT, Perplexity and Google’s NotebookLM, and not checked whether the quotes from those summaries were accurate. He subsequently published them in his Substack newsletter.
The errors were highlighted by an investigation by one of Mediahuis’s own titles, NRC, where Vandermeersch had been editor-in-chief in the 2010s. NRC alleged Vandermeersch had published “dozens” of quotes that were false and that seven quoted individuals in his posts said they had not made the statements attributed to them.
Again, this is a cockroach situation; there are many more cases of such things across industries that never get caught and don’t make the news.

I don’t believe “hallucination” is the correct word when the AI is using algorithms to keep customers addicted and happy. I suggest this is the very same problem that Cambridge Analytica, Zuckerberg, and others have been creating from the start. “AI” is just branding, and the same addictive algorithms are applied to the public in unregulated fashion. The only difference is that the branding sells a story that AI is actually some kind of entity, with the expectation of fantastic competence.
I’m sorry to say, it’s incredibly competent at making fools of us.
I treat AI the same way I treat gambling: I mostly avoid it completely, and if I have to use it, I do so with a lot of caution and skepticism.