@nonentity@technology I think the problem with your framing is it implies that humans are not also “explosively imprecise, statistically luke-warm, grey goo extrusion sphincter(s)”. We weren’t exactly living in a perfect world prior to AI, and all AI does is regurgitate what humans created. AI isn’t really changing the character of anything - and in several domains I’d argue it’s improving the baseline (coding for one).
It’s telling that you assumed the description applied exclusively to LLMs.
No one who persists in labelling LLMs as ‘AI’ should be treated as an authority on the subject, and I’d argue it’s one of the greatest indicators of how little they comprehend the situation.
THANK YOU! I studied AI in school, and it always bothers me when people think that LLMs are the only facet of AI. From 2022 to 2024, I had a knee-jerk reaction of explaining that AI is more than LLMs and that LLMs are really a small subset of the entire universe of AI, yadda yadda yadda. Now I’ve given up and roll my eyes when someone tries to tell me about the cool new Claude skill they built.
What’s funnier is that people think I hate LLMs. That couldn’t be further from the truth; they are a fantastically interesting and innovative technology! “Attention Is All You Need” is a great paper, and super impactful. I just hate that people are outsourcing their thinking to a chatbot and neglecting the rest of my field of study.
I’m confused. Aren’t you the one who referred to LLMs in a thread that was conflating LLMs with AI? The parent’s comment seems to be right on point.
It’s kind of like how we’ve lost the war on hacking.
Large language models fall under the current definition of artificial intelligence just as much as Cyc or Cog did in their day, or various expert systems and machine learning models, diffusion models, etc.
Pretty much any non-deterministic inference engine can be classified as an AI, including LLMs.
LLMs are still a facet of AI though. It sounds like they’re saying it shouldn’t be categorized as AI at all.