I hope all the money thrown at this “AI” (misnomer, IMHO - it’s really just extremely overwrought pattern matching) causes at least some significant humbling (if not outright downfall) of some tech giants. I haven’t programmed in a couple decades, and yet even I could tell they weren’t gonna get to AGI offa this crap - I can’t believe how badly some of these supposed techies fell for their own hype.
When discussing it, I often call it “simulated intelligence”, because at the end of the day that’s what neural networks are.
Edit: only to non-technical people, as simulations are a different thing.
In science fiction I’ve often seen the term VI (Virtual Intelligence) to refer to machines that look intelligent, and could probably pass a Turing test, but aren’t really intelligent (normally VI coexists with actual AI, often used as interfaces, where it would be a waste, or too risky, to use a proper AI).
LLMs look a bit like that, though they’re probably too unreliable to use as an interface for anything important.
IMHO it’s real intelligence, but not artificial. LLMs have been fed virtually everything created by the actual intelligence: humans. Like I said, all they do is execute what is effectively pattern matching (on serious steroids) to distill what humans have created into something more bite-sized.
It’s pattern matching, but it’s not matching intelligently. An intelligence should be able to optimize itself to the task at hand, even before self-improvement. LLMs can’t select relevant data to operate on, nor can they handle executive functions.
LLMs are cool, and I think humans have something similar to process information with, but that’s just one part of a larger system, and it’s not the intelligent part.
Ask an LLM how many 'r’s are in the word ‘strawberry’ and tell me it has any actual intelligence behind its output.
To be fair to LLMs, they get the text as a series of tokens, so when you type strawberry, they see 🍓 or something. Better counterexamples are variations of the river crossing puzzle changed to be trivial.
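To make the tokenization point concrete, here’s a minimal sketch. The token split shown is illustrative only (real splits vary by tokenizer); the point is that the model operates on opaque subword chunks, not characters:

```python
# Character-level view vs. a token-level view of "strawberry".
word = "strawberry"
print(word.count("r"))  # 3 — trivial when you can see characters

# An LLM sees subword tokens instead. A hypothetical BPE-style split
# (made up for illustration; actual tokenizers differ):
tokens = ["str", "aw", "berry"]

# The letters are still in there, but the model receives token IDs,
# not character sequences, so it can't "look inside" a token to count.
print(sum(t.count("r") for t in tokens))  # 3, same letters, hidden view
```

So the model has to have effectively memorized letter facts about each token, which is why character-counting questions trip it up.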
The correct term is Stochastic Parrot… that is what LLMs do. It sounds even cooler than AI, imho
No it’s not. They haven’t been this way for years
https://pli.princeton.edu/blog/2023/are-language-models-mere-stochastic-parrots-skillmix-test-says-no
There are several dozen of these studies
Doesn’t matter. There is no cognition. Just word salads mixed and matched with no possibility of receiving “I don’t know” for an answer.
That was a remarkably uninsightful way to approach that topic. Please link to more of these “studies”; that one was way too short.
The virgin cited study vs the Chad Ad Hominem
Did you read the study? It’s hilarious. They’re using LLMs to “grade” the number of observed “skills” based on the output of LLMs. They’re using a stochastic parrot to evaluate another stochastic parrot, and concluding that there is some kind of emergent “skill” going on. Sheeeesh. I’d assume the authors of the paper are just having a laugh. But, one thing is for sure, the AI stupidity train keeps chugging along.
So they no longer use probability to choose the next word? I wonder how they do it now
Still stochastic. Even now they still can’t reliably do repeated tasks
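For anyone wondering what “stochastic” means here in practice, this is a minimal sketch of standard next-token sampling. The vocabulary and logit scores are made up; real models score tens of thousands of tokens, but the mechanism is the same: softmax turns scores into probabilities, then a token is *sampled*, not picked deterministically:

```python
import math
import random

# Hypothetical model output: one score (logit) per vocabulary item.
vocab = ["cat", "dog", "parrot"]
logits = [2.0, 1.0, 0.1]  # invented numbers for illustration

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# The next token is drawn at random, weighted by probability —
# usually the top-scoring token, but not always. That's the stochastic part.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

Lowering the temperature toward zero makes this approach greedy argmax (near-deterministic); raising it flattens the distribution, which is why the same prompt can yield different outputs run to run.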