Backstory here: https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/
Personally I think this is a good response. I hope they stay true to it in the future.
Ars Technica is sorry: https://www.youtube.com/watch?v=9u0EL_u4nvw
I read them regularly for years until they started banning folk in the forums for pointing out how problematic it is for Eric Berger to still be slobbering on Elon’s knob.
Don’t think I’m missing much, though I do miss Beth Mole.
Woah, they take the blame and apologize. This is not often seen and commands respect.
Link to the archived version of the article in question.
I actually like the editor’s note. Instead of naming-and-shaming the author (Benj Edwards), it blames “Ars Technica” as a whole. It also claims they looked for further issues. It sounds surprisingly sincere for a corporate apology.
Blaming AT as a whole is important because it acknowledges Edwards wasn’t the only one fucking it up. Whatever a journalist submits needs to be reviewed by at least a second person, exactly for this reason: to catch dumb mistakes. Either this system is not in place or it is not working properly.
I do think Edwards is to blame, but I wouldn’t go so far as saying he should be fired, unless he has a history of doing this sort of dumb shit. (AFAIK he doesn’t.) “People should be responsible for their tool usage” is not the same as “every infraction deserves capital punishment”; sometimes a scolding is enough. I think @totally_human_emdash_user@piefed.blahaj.zone’s comment was spot on in this regard: he should’ve taken sick time off, but this would have cost him vacation time, and even being forced to make that choice is a systemic problem. So ultimately it falls on his employer (AT) again.
I agree with you. For better or worse, I have to imagine a lot of people whose job relies on pumping out regular articles use LLMs to get the ball rolling. Which is what appears to have happened here.
Just to be clear, the article itself was written by him; he was just experimenting with an AI tool to extract quotes (because learning about AI tools is literally his job), and because he had COVID at the time he got mixed up and pasted paraphrased quotes rather than original quotes. (Arguably he should not have been experimenting with a new tool while sick, but I am willing to cut him some slack because he was probably not thinking clearly at the time.)
The serious thing here is actually not so much that he used an AI tool at some point in the process but that fabricated quotes ended up in a published article.
Thanks for clarifying, after I left that comment I realized I had the order of events reversed!
Benj Edwards, the author responsible, has posted his side.
Thanks for sharing, I was wondering if he would say anything about it. They seem to be handling it well.
This is a good way to handle the situation and an understandable and believable scenario, so I’m perfectly willing to forgive this. I’m a little less okay with an apparent “work in spite of illness” policy, however.
But still, it’s a serious blunder, and it needs to be said that any repeat of this at all would be very damning. I can’t forgive this level of fuckup twice. Any AI use is a risk, folks; treat it like one.
When I first became aware of it, I did not expect this story to become a good case for workers’ rights and ensuring everyone gets enough rest, but here we are.
Why would he play with an AI toy while he’s doing his job and he’s sick?
Of course something was bound to happen.
You can’t empathize with someone having to work while sick and wanting to use a tool to make that work slightly easier?
I tend to empathize with the victims of plagiarism over the perpetrators of it.
That’s an incredibly narrow-minded way to view this issue.
Well, I can see how that could happen, and in fact, copy-paste artifacts and unintended summaries/hallucinations have happened to me when grabbing output back from an LLM.
Here’s the thing though: I catch it 100% of the time because my writing has version control and I compare diffs. When dealing with something that can exist as plain text, there isn’t a good reason not to have that setup. I’m no journalist, but it blows my mind that writers who deal specifically in reported facts apparently don’t have systems in place to idiot-proof and preserve their sources of truth.
I get it, at some point back in the analog days there were more editors and copy editors who actually verified these things, and those jobs were sacrificed at the altar of capitalism. I’ve seen writing quality on the web take a downturn as a result. But for fuck’s sake y’all, maybe do the bare minimum and start implementing safeguards before you let your writers use inherently lossy tools?
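To make the safeguard idea above concrete, here’s a minimal sketch of the kind of automated check the comment is describing: verifying that every direct quote in a draft appears verbatim in the source it’s attributed to. This is purely illustrative; the function name, regex threshold, and sample text are all hypothetical, not any newsroom’s actual tooling.

```python
# Illustrative sketch (hypothetical tooling, not anything Ars actually uses):
# flag any direct quote in a draft that does not appear verbatim in the
# source text it is attributed to. An LLM-paraphrased quote would fail
# this check immediately.
import re

def check_quotes(draft: str, source: str) -> list[str]:
    """Return quotes from the draft that are NOT found verbatim in the source."""
    # Grab quoted spans of 20+ characters to skip short scare-quoted words.
    quotes = re.findall(r'"([^"]{20,})"', draft)
    return [q for q in quotes if q not in source]

source = 'He said, "the benchmark results were reproducible on our hardware."'
draft = ('The engineer noted that "the benchmark results were reproducible '
         'on our hardware." He added that "the tooling basically wrote itself."')

# The second quote is flagged because it never appears in the source.
print(check_quotes(draft, source))
```

A version-control diff between the pre-LLM and post-LLM draft, as the commenter describes, catches the same class of error from the other direction: any change inside a quoted span is automatically suspect.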
Thank fuck Bsky has these character limits or else he would have had to put all that text in an easily legible format for reading and copying. Fuck character limits up their stupid asses.
The author added the entire text in the alt text; if you click on the image and then the alt label, you can see the full thing. You can easily copy and paste from that or read it there instead.
All the more stupid. Why is it hidden in the alt text and not in the text of the post?
Not using an ActivityPub-based platform has its drawbacks, I guess.
Some bluesky clients/instances support longer posts.
This sounds eerily familiar…
I don’t know if Hearst told him to use a chatbot to generate their “Best of Summer Lists,” but it doesn’t matter. When you give a freelancer an assignment to turn around ten summer lists on a short timescale, everyone understands that his job isn’t to write those lists, it’s to supervise a chatbot.
But his job wasn’t even to supervise the chatbot adequately (single-handedly fact-checking 10 lists of 15 items is a long, labor-intensive process). Rather, it was to take the blame for the factual inaccuracies in those lists. He was, in the phrasing of Dan Davies, “an accountability sink” (or as Madeleine Clare Elish puts it, a “moral crumple zone”).
https://locusmag.com/feature/commentary-cory-doctorow-reverse-centaurs/
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.
Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.
Nothing about who put it in there or what you’re doing to them?
We are reinforcing our editorial standards following this incident.
It sounds like they will be reminding their team not to do that and scrutinizing articles in the near future.
Someone deserves to be fired. Just imagine you’re paying someone to do a job and they just 100% completely outsource it to a machine in 5 seconds and then go home.
So you’re calling for someone to be fired without actually reading the article or understanding the situation? What punishment do you deserve for your laziness?
I did read the article. What punishment do you deserve for assuming I did not?
You said that the author 100% completely outsourced his job to AI, which is such an absurd exaggeration of the truth that assuming you didn’t read the article was me being generous. Apparently you read it then just lied. Sorry for the confusion on my part, I assumed that you were just an idiot instead of a malicious idiot.
The article says nothing to contradict my statement. The only thing that did was the author’s own statement, which, if you just took his word for it without asking yourself any questions, then you’re the idiot.
Calm down, that’s not what happened
@GammaGames this.
The editors are ultimately responsible for this, and they are owning up, @artyom. This is how it should be. There will probably be internal consequences to whoever wrote the piece and included the slop-quotes, but the buck stops with the editors as far as anyone outside is concerned.
Otherwise editors could just blame every slip-up and failure on the intern (whoever the intern is that week) and publicly fire them, having a nice public execution instead of real accountability.
He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this. Now, arguably he should have taken sick time off instead of trying to work through it (as he admits), but this would have cost him vacation time, and the fact that he even was forced into making this choice is a systemic problem that is not being sufficiently acknowledged.
this would have cost him vacation time
After all these years my poor European brain is still struggling to understand this.
It helps if you think of America as believing that when an individual gets sick it is their own fault.
If only this could have been prevented by, I don’t know, not experimenting.
It is literally his job to be familiar with this technology, which he cannot do if he does not experiment with it.
Having said that, doing this experiment while sick was probably ill-advised, so in that sense I agree with you, but in fairness he probably was not thinking clearly while he was sick.
Surely he picked the very worst thing to experiment with, and at the worst time… And that’s the very best case scenario
Yes… hence the “ill-advised” part.
while sick …
ill-advised
I see what you did there.
he had COVID while doing all this
I’ve had COVID before, it sucks but it doesn’t make you stupid.
he just got mixed up when experimenting
I don’t believe him.
Why don’t you believe him?
Because it’s completely ridiculous. What if he was just phoning it in? He’s just going to come out and say it?
So in other words, you are just making an assumption.
I don’t believe him.
I know the internet is full of untrustworthy charlatans, so I can’t blame you, but I’m as anti-AI as they come and I do believe him. Mistakes happen, especially in the context of rushed work done while sick. Remember that a lie by a grifter and the truth from an innocent sometimes look exactly the same; effective lies are built on what was once a truth, after all.
Removed by mod
There are plenty of people who are already piling on him. Is it really so bad that some of us feel the need to counter some of what is being said?
deleted by creator
It is 23.43, and I can’t analyze this tonight. Ars has been good for a long while, and I enjoy their reporting. To have to reassess this is disappointing, but I’ve already had to feel this with the NYT and WaPo. Not exactly a huge loss here. But I want to fully investigate what happened ahead of reaching a conclusion.
Rest assured, I will reach a conclusion. I don’t think I’ll like the one I think I’ll find, but that’s journalism for you. I will withhold judgment until I’ve had a chance to fully examine what happened here.
Why write this comment when you don’t have anything to say? I’m puzzled why I should care that you did not analyse this yet.
There is no reason for you to care. I am informing users familiar with my writing and methods that this is now on my radar, but I can’t yet do it justice. I’m being honest about not being ready to perform analysis.
I don’t see anything new? It’s a response, I was hoping they’d actually say what happened instead of… just repeating that it did.
The author explained
i don’t think any journalist, especially a technology journalist, should be using AI summaries in any capacity as part of their research. i’m glad he owned up to it but this should be a career-damning event.
Yeah, you should lose your entire career for one mistake! Fucking up an article on the internet, can you imagine a worse sin???
He caused a massive shitshow, it’s already a mark on his career.
He was using the AI to help him extract quotes; it makes sense that, as the head AI writer, he would be experimenting with new AI tools to become more familiar with them. It sounds like he just got confused at some point and mixed up the AI tool’s paraphrasing with an original quote, and it sounds like he was not in the best shape when writing this article, so he made a dumb mistake. Regardless, arguably the use of AI here is a red herring, because if he had double-checked his quotes and fixed them, then it would not have mattered whether he used AI or not.
I think that damning his entire career over this would be too extreme. I suspect that this will be a very good learning experience for him, but that unfortunately he will need to apply his lessons at another place of employment.
Good response, and it sounds like lessons learned!
Agreed, which is why I really do not like how much people are beating on him, but the problem remains that he published an article with fabricated quotes, which hurts not only his own credibility but that of Ars as a whole. I think that it may be best for everyone if he applies the lessons that he learned at another place of employment.
(Also, though, Ars really needs to do something about its culture regarding working while sick, as that makes it inevitable that a mistake like this is going to be made, AI or not.)
I think beating him while he’s down is too much. Mean comments on the internet do not compare to having to find a new job while recovering from COVID.
Fair enough. Realistically, my understanding is that he and the other authors are part of WGA, so Ars would be required to go through an investigative process before firing him, which would probably take enough time that he would have had plenty of time to recover from COVID before having to hunt for a job.
Having said that, I am out for change, not for blood. I think that if Ars announced that the root problem was the lack of sick leave so it was a systemic failure rather than a personal failure (or something along those lines), then that might actually be a pretty good outcome as well.
That would be a good outcome!
The question is, how many other articles with fake quotes are there on Ars? And not just Ars, but across the mainstream media.
I think, all things considered, they handled this pretty well, and I’m actually more likely to read an Ars article now than I was before the incident (when I had a neutral opinion).