If it’s plausible enough based on the dataset it was trained on, it exists. Hallucinations are basically just the LLM trying to stay current by inference, I think.
Edit: Guess I used the wrong words, oh well
The hallucinations come from the LLM losing context. The window is only so large.
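(A toy sketch of the window mechanic being described, with a made-up tiny window and whitespace “tokens”, no real tokenizer:)

```python
CONTEXT_WINDOW = 4  # hypothetical tiny window, measured in tokens

conversation = ("the patient is allergic to penicillin "
                "so prescribe something else").split()

# Only the most recent tokens fit in the window; anything earlier is
# simply invisible to the model when it predicts the next word.
visible = conversation[-CONTEXT_WINDOW:]
print(visible)  # ['so', 'prescribe', 'something', 'else'] -- the
                # allergy warning at the start has fallen out of view
```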
“Hallucinations” are things humans do. An AI can only ever be wrong. Even when it makes up data, it’s just a stochastic parrot.
They coined the term “hallucination” as soon as people realized that the “AI thing” was throwing bullshit back at us.
They had to force that term into people’s heads, or else we would call it bullshit, lies, and so on, as we should.
It’s like Google with their “side loading”. There is no such thing; it’s just installing an app…
It’s a word war. People are being manipulated.
Lies require intent.
So the AI hallucinates because it loses context. Hooked up to quantum computers, you won’t have that happening. So regular people think the thing is stupid while the government has a murder AI.
Been going on for a while. Remember “Alternative Facts”?
I concur.
Why do you concur? You have a problem with “hallucinations” because it’s something humans do. This commenter wants to call them (among other things) “lies”, which implies intent and knowledge of falsehood, which an LLM definitely can’t have. I’m not saying “hallucinations” is super accurate, but I don’t think the term is too positive or that it downplays the major issues LLMs have.
OK, so I think what you see as the commenter wanting to call them lies is really a description of what the corporations are pushing (as “hallucinations”, but what a reasonable person would call lies).
In other words, it’s a “meta” conversation that I concur with. An LLM obviously cannot do human things, but “sales” can portray it as if it does.
In my day-to-day usage I make an actual effort to refer to wrong output from an LLM as simply wrong, not with human-focused words.
fair enough
Hallucinations are by design for AI. It’s just advanced next-word prediction, so all answers (correct or wrong) go through the same hallucination process.
Ah, so it’s always hallucinating; sometimes the hallucinations conveniently line up with reality.
The whole goal of these algorithms is that you put an input in and the output is as close to the most likely correct answer as it can be; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get is from outside.
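To make that concrete, a minimal sketch of the “most likely next word” step, with made-up logits for three candidate tokens (no real model involved):

```python
import math

# Hypothetical scores a model might assign to candidate next tokens.
logits = {"Paris": 4.0, "London": 2.5, "banana": -1.0}

# Softmax turns the scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding just emits the most probable token, right or wrong.
print(probs)                      # ~{'Paris': 0.81, 'London': 0.18, 'banana': 0.01}
print(max(probs, key=probs.get))  # 'Paris' -- the same machinery would happily
                                  # emit a wrong token if its score were highest
```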
Exactly. This is also why AI doesn’t truly understand the responses it gives back.
It fakes intelligence via the training data, so it looks like intelligence to an untrained eye, but in reality AI is just one big hallucination that tries its best to give the most likely and correct answer possible (again, without understanding).
LLMs don’t try anything. They are deterministic tools.
While I understand your point, “deterministic” with a billion variables is beyond human ability to process, let alone the multi-billion-parameter models in general circulation today.
At what point does deterministic descend into random?
Assumed Intelligence is a solution for a bunch of multivariate problems, like, say, “the travelling salesman”, but it’s not intelligence, nor, in my opinion, is it effectively “deterministic”.
Fair enough. There’s a significant difference in complexity between the surface implication of what I said versus reality. Yes, it’s deterministic, but it’s also complex enough that something more should be said… though, we need to be careful here. Our language is not mature enough to scaffold the precise concepts we need here, and attempting to do so regardless carries the risk of smuggling in many concepts we did not intend to smuggle in. Concepts like intent, for example. I agree with you, but cautiously.
It shouldn’t at any point. Instead, we’re discussing a system that’s similar to the double pendulum or three body problem. It’s deterministic, though computationally irreducible. That’s chaotic, but it is not random. It’s extremely sensitive to initial conditions.
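As a toy illustration of deterministic-but-chaotic, here’s the logistic map (a textbook chaotic system, nothing to do with actual LLM internals):

```python
# Logistic map: each step is exactly determined by the previous one,
# yet trajectories that start a hair apart diverge quickly.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # initial conditions differing by one millionth
for _ in range(40):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # after 40 deterministic steps the trajectories are
                   # far apart: chaotic and irreducible, but never random
```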
What are you saying, precisely? It’s well known that LLMs have non-deterministic output (Ilya Sutskever even says as much). Are you saying the way it goes about retrieving tokens is deterministic?
I think you’re right about that, but it is artificial nondeterminism in the sense that it’s relying on several algorithmic factors and, more subtly, device differences. The system itself is a complex yet deterministic function.
I can largely agree with that, but I still contend you’re conflating a few things to make that argument. Fundamentally, an LLM makes predictions based on probability (ignoring temperature), and probability does not equal certainty.
I would argue that’s empirically true but not fundamentally true. Actually, I’d argue that my point is the fundamental truth here. Computers still cannot generate random output. They simulate the process, and it’s not truly random. It’s just good enough to fool us at the surface level.
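For example, Python’s standard library random module (a Mersenne Twister, i.e. a pseudo-random generator) is fully reproducible once you fix the seed:

```python
import random

random.seed(42)                              # fix the seed
first = [random.random() for _ in range(3)]

random.seed(42)                              # same seed again
second = [random.random() for _ in range(3)]

print(first == second)  # True -- the "random" stream is completely
                        # determined by the seed; it only simulates randomness
```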
They are deterministic but complex to determine.
The Assumed Intelligence systems I’m familiar with have a “random” element, but it’s unclear where that source of randomness comes from. Is it using a computational random source, or something like the lava lamp wall at Cloudflare, which is significantly more random, potentially actually random?
It’s primarily temperature. That being said, there is still a chance that an LLM can output unexpected values even at low temperatures.
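For anyone curious, a toy sketch of what temperature does to the next-token distribution (made-up logits, not any real model’s numbers):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing the logits by the temperature before softmax sharpens
    # (T < 1) or flattens (T > 1) the resulting distribution.
    scaled = [v / temperature for v in logits]
    total = sum(math.exp(v) for v in scaled)
    return [math.exp(v) / total for v in scaled]

logits = [4.0, 2.5, -1.0]  # hypothetical scores for three tokens
print(softmax_with_temperature(logits, 1.0))  # ~[0.81, 0.18, 0.01]
print(softmax_with_temperature(logits, 0.1))  # ~[1.00, 0.00, 0.00] near-greedy
print(softmax_with_temperature(logits, 2.0))  # ~[0.64, 0.30, 0.05] flatter, more surprises
```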