Until they solve the AI hallucination problem, I’ll never be able to trust it.
It’s a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the amount of context attached to each piece of information (I don’t know the proper term for it).
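Roughly what I mean, as a toy sketch (not how any real model is built): the model only scores which word is likely to come next, so a fluent wrong answer and a fluent right one look identical to it. The little table of probabilities below is made up purely for illustration.

```python
import random

# Hypothetical next-token model: each 3-word context maps to plausible
# continuations with made-up probabilities. Nothing here encodes truth,
# only what "sounds likely" in the training data.
MODEL = {
    ("the", "capital", "of"): [("france", 0.5), ("australia", 0.5)],
    ("capital", "of", "france"): [("is", 1.0)],
    ("capital", "of", "australia"): [("is", 1.0)],
    ("of", "france", "is"): [("paris", 0.9), ("lyon", 0.1)],
    ("of", "australia", "is"): [("sydney", 0.7), ("canberra", 0.3)],
}

def generate(prompt, steps=3):
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-3:])
        choices = MODEL.get(context)
        if not choices:
            break
        words, weights = zip(*choices)
        # Sample by plausibility alone: "sydney" usually beats "canberra"
        # even though it's wrong, because fluency is the only objective.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the capital of"))
```

To make that toy model stop confidently picking "sydney", you’d have to attach a lot more grounding context to every fact it can emit, which is the cost I was getting at.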
I believe it’s just complexity and token/compute usage.
You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).
It’s also 100% unfixable as a premise for the technology. I can enjoy an upscaling algorithm for my retro games to look more detailed at the cost of an odd artifact, but I sure as shit am not taking that risk for information gathering and general study.
I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.
Nobody says to blindly trust it…
That doesn’t seem like a solvable problem.
People make stuff up, too. The difference is that with a person, the bluff is often revealed through non-verbal cues.
Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.
If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.