This is a New York Times article. By default, the New York Times itself is the citation, just like with every other MSM outlet. And even then, this specific article does attribute its sourcing:
To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, researchers. Some of these people spoke with the company’s approval, and have been working to make ChatGPT safer. Others spoke on the condition of anonymity because they feared losing their jobs.
Claude is trying to lick my ass clean every time I ask it a simple question
The article only said they made a test, not that the models weren't failing it, which happens to be what the linked paper says. This isn't new: LLMs also consistently failed a certain intelligence test devised around that same period until ~2024.
As soon as they found experts who were willing to say something other than "don't make a chatbot."
That’s 55%: https://humanfactors.jmir.org/2025/1/e71065