In science fiction I’ve often seen the term VI (Virtual Intelligence) used for machines that look intelligent, and could probably pass a Turing test, but aren’t really intelligent (normally VI coexists with actual AI and gets used for things like interfaces, where it would be a waste, or too risky, to use a proper AI).
LLMs look a bit like that, though they’re probably too unreliable to use as an interface for anything important.
IMHO it’s real intelligence, but not artificial. LLMs have been fed virtually everything created by the actual intelligence: humans. Like I said, all they do is execute what is effectively pattern matching (on serious steroids) to distill what humans have created into something more bite-sized.
It’s pattern matching, but it’s not matching intelligently. An intelligence should be able to optimize itself for the task at hand, even before we get to self-improvement. LLMs can’t select the relevant data to operate on, nor can they handle executive functions.
LLMs are cool, and I think humans have something similar to process information with, but that’s just one part of a larger system, and it’s not the intelligent part.
When discussing it, I often call it “simulated intelligence”, because at the end of the day that’s what neural networks are.
Edit: only to non-technical people, as simulations are a different thing.
Ask an LLM how many ‘r’s are in the word ‘strawberry’ and tell me it has any actual intelligence behind its output.
To be fair to LLMs, they get the text as a series of tokens, so when you type strawberry, they see 🍓 or something. A better counterexample is a variation of the river crossing puzzle changed to be trivial.
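To make the tokenization point concrete, here’s a minimal sketch using OpenAI’s tiktoken library (my pick of tokenizer is an assumption, any BPE tokenizer shows the same thing): “strawberry” arrives as a short list of subword IDs, so counting letters means asking about information the model never directly receives.

    # Minimal sketch of the tokenization point, assuming the `tiktoken`
    # package is installed (pip install tiktoken).
    # The model never sees ten characters, just a few subword token IDs.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
    ids = enc.encode("strawberry")

    print(ids)  # a short list of integer token IDs, not letters
    print([enc.decode_single_token_bytes(t) for t in ids])  # the subword chunks the model actually "sees"

The exact IDs and splits vary by model and tokenizer, but the picture is the same: the letters are only there implicitly, folded into the subword pieces.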