Can we please stop treating AI “confessions” like they mean jack shit? It’s just giving genAI companies the self-seriousness they crave and making anti-AI people look like hypocritical morons.
It’s just a way for them to shift the blame for corporate negligence from either company onto an AI model.
Exactly. “Our busted ass untested software deleted our own database” doesn’t fill investors with confidence
And honestly the negligence was Railway hosting backuos on the same volume as production data.
I don’t know if that was Railway’s fault, but it was definitely this companies fault to use a company who followed that pattern.
Didn’t confess anything, only write the thing people statistically wanted to hear
Agreed and just as bad, if not worse, it didn’t learn anything from its mistake.
Could not learn anyway
We, as a society, need to stop pretending LLMs are conscious.
This is vectors between numbers. We humans ascribe value to it.
It is possible for vectors between numbers to be conscious - these just aren’t.
The Chinese Room isn’t real. John Searle pointed to a hard drive and said “processor.” The whole argument is Cartesian dualism, except instead of a soul, you need Steve to pay attention. If he gets the same answers while distracted then they don’t count.
It’s kinda funny where tech is going. We are going from programming the machines to do exactly what we want to saying what we want in natural language to some model hoping they are gonna do it right.
When technology become more magic then science.
Maybe he should have tried saying please. 😆
I’ve been saying it a lot lately, we finally built a computer that’s as unreliable as a human. I’m pretty sure that’s not a good thing.
The AI confession neither has internal monolog or access to the thinking tokens.
LLMs are incapable of introspection, they can’t playback their attention weights, or review them, or recall what they thought.
Even thinking tokens are just a reinforcement learning based loop to anneal the models thinking back to a solution. And again, Claude hides the thinking tokens so they don’t get used for model distillation.
This article, and all the articles like it, are pandering bullshit written by morons hoping to fool morons. It is a fiction written by the model to contrast <bad thing happened> to the chat log and system prompt.
Good day.
So correct me if I’m wrong, but the following happens for AI:
- Company gives guidelines and parameters for the project
- Company trains AI on whatever data
- No matter the data, AI still gives a general answer or summary.
- The answers are sometimes confidently incorrect
- The AI is uncontrollable because it considers the data general or loosely based guidelines
- There is no way to control the AI after a certain tipping point, because its learning is based in a fuzzy math way of thinking
What I don’t get is, even if the data wasn’t shitty like reddit’s info, would it still go off the rails? It sure seems like it.
AI was being a little stinker sir






