Grok, the AI chatbot on the X social media platform, has issued answers recently that have startled and amused users, including insinuating that President Donald Trump is a pedophile and that Erika Kirk is actually Vice President JD Vance in drag.
A chatbot is not capable of doing something so interesting as “going rogue.” That expression implies it’s a mind with agency making a choice to go against something, and this program doesn’t have the ability to do such a thing. It’s just continuing to be the unreliable bullshit machine the tech will always be, no matter how much money and hype continues to be pumped into it.
I am so glad I did not stick with journalism in college. I would blow my brains out if I got replaced by this shit.
Yes, any journalist who uses that term should be relentlessly mocked. Along with terms like “Grok admitted” or “ChatGPT confessed” or especially any case where they’re “interviewing” the LLM.
These journalists are basically “interviewing” a magic 8-ball and pretending that it has thoughts.
Seriously. They may as well be interviewing a flipping coin, and then proclaiming that it “admitted” heads.
Mind or not, they appear to have effectively gained “agency”.
So you’re getting a lot of downvotes and I want to try and give an informative answer.
It’s worth noting that most (if not all) of the people talking about AI being super close to exponential improvement and takeover are people who own or work for companies heavily invested in AI. There’s talk of (and examples of) AI lying, hiding its capabilities, or being willing to murder a human to achieve a goal after promising not to. These are not examples of deceit; they simply showcase that an LLM has no understanding of what words mean or even are. To it, they are just tokens to be processed, and the words ‘I promise’ hold exactly the same level of importance as ‘Llama dandruff’.
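To make the "just tokens" point concrete, here's a toy sketch (not any real model's tokenizer, and vastly simpler than the subword tokenizers actual LLMs use): text is reduced to integer IDs before the model ever sees it, and nothing in that representation marks one phrase as a binding commitment and another as nonsense.

```python
def toy_tokenize(text, vocab):
    """Map each word to an integer ID, assigning new IDs as words appear."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

vocab = {}
promise_ids = toy_tokenize("I promise", vocab)
dandruff_ids = toy_tokenize("Llama dandruff", vocab)

# Both phrases become plain integers; the model downstream sees only
# numbers, with no built-in notion that one of them is a promise.
print(promise_ids)   # -> [0, 1]
print(dandruff_ids)  # -> [2, 3]
```

Whatever weight "I promise" carries has to be learned statistically from how those token sequences co-occur with others in training text, not from any grasp of what a promise is.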
I also don’t want to disparage the field as a whole. There are some truly incredible expert systems (basically small specialized models using a much less shotgun approach to learning compared to LLMs) that can achieve remarkable things, with performance requirements modest enough to run on home hardware. These systems are absolutely already changing the world, but since they’re all very narrowly focused and industry- or scientific-field-specific, they don’t grab headlines like LLMs do.
Fair, and nuanced. With some coding magic I can, in theory, chain these “demons” to work within certain parameters and judge the results myself.
No. They haven’t.
Some dipshit deleted the guardrails that stopped it from hallucinating things that are anti-GOP.
No, they haven’t. They’re effectively prop masters. Someone wants a prop that looks a lot like a legal document, the LLM can generate something that is so convincing as a prop that it might even fool a real judge. Someone else wants a prop that looks like a computer program, it can generate something that might actually run, and one that will certainly look good on screen.
If the prop master requests a chat where it looks like the chatbot is gaining agency, it can fake that too. It has been trained on fiction like 2001: A Space Odyssey and Wargames. It can also generate a chat where it looks like a chatbot feels sorry for what it did. But, no matter what it’s doing, it’s basically saying “what would an answer to this look like in a way that might fool a human being”.
How would you discern between something having agency and a black box mirroring Twitter discourse?
Simple, it has agency in the same sense that we do: as in, no one knows why the fuck we do what we do
Fucking magnets, how do they work?
We do understand exactly how LLMs work, though, and it in no way fits with any theory of consciousness. It’s just a word extruder with a really good pattern matcher.
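The "word extruder" idea can be sketched with a deliberately crude stand-in: a bigram Markov chain. Real LLMs are transformers and enormously more capable, but the core loop has the same shape, namely repeatedly picking a plausible next word given what came before, with no notion of truth attached to the output.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def extrude(follows, start, n_words, seed=0):
    """Generate up to n_words by repeatedly sampling a seen continuation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: no continuation was ever observed
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny made-up corpus purely for illustration.
corpus = "the model predicts the next word and the next word follows the pattern"
follows = build_bigrams(corpus)
print(extrude(follows, "the", 6))
```

Nothing in this loop checks whether the emitted sequence is true, sincere, or even coherent; it only reflects patterns in the training text. Scaling the pattern matcher up changes how convincing the output is, not that basic fact.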
Easily