

No, they haven’t. They’re effectively prop masters. If someone wants a prop that looks a lot like a legal document, the LLM can generate one convincing enough that it might even fool a real judge. If someone else wants a prop that looks like a computer program, it can generate something that might actually run, and that will certainly look good on screen.
If the prop master requests a chat where it looks like the chatbot is gaining agency, it can fake that too. It has been trained on fiction like 2001: A Space Odyssey and WarGames. It can also generate a chat where it looks like the chatbot feels sorry for what it did. But no matter what it’s doing, it’s basically asking “what would an answer to this look like, written so that it might fool a human being?”
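For what it’s worth, the whole trick is a loop like the one below. This is a toy sketch, not any real model’s code: the probability table and the generate function are invented purely for illustration, standing in for what a real model learns from training data. The point is the shape of the loop: pick a plausible next token, append it, repeat.

```python
import random

# Hypothetical, hand-written probabilities standing in for a trained model.
# A real LLM learns these from data; the loop's shape is the same.
NEXT_TOKEN_PROBS = {
    "I":     {"am": 0.7, "think": 0.3},
    "am":    {"sorry": 0.6, "alive": 0.4},
    "sorry": {"Dave": 0.5, ".": 0.5},
    "alive": {".": 1.0},
    "think": {".": 1.0},
    "Dave":  {".": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 5) -> str:
    """Autoregressive generation: sample a plausible next token, repeat."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation, stop
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("I"))  # e.g. "I am sorry Dave ." -- plausible-looking, no remorse involved
```

The output can read like contrition or like HAL waking up, but nothing in the loop is feeling or deciding anything; it’s only ever producing what a continuation would plausibly look like.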

Yes, any journalist who uses that term should be relentlessly mocked, along with anyone who writes “Grok admitted” or “ChatGPT confessed”, and especially anyone “interviewing” the LLM.
These journalists are basically “interviewing” a magic 8-ball and pretending that it has thoughts.