The phrase “artificial intelligence” was coined by John McCarthy in his 1955 proposal for the 1956 Dartmouth College workshop, where researchers aimed to explore whether machines could think like humans.
When most people hear “AI” they think AGI, and because a narrow-AI language model doesn’t perform the way they expect an AGI to, they say stuff like “it’s not intelligent” or “it’s not an AI.”
AI as a term is about as broad as the term “plants,” which covers everything from grass to giant redwoods. LLMs are just one subcategory, like conifers.
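To make the analogy concrete, here’s a toy sketch of that hierarchy in Python. The categories and memberships are illustrative only; the real field doesn’t divide into a neat tree:

```python
# Purely illustrative taxonomy -- the real field is messier than any tree.
AI = {
    "classical AI": ["expert systems", "game-tree search", "Prolog programs"],
    "machine learning": {
        "genetic algorithms": [],
        "neural networks": {
            "deep learning": {
                "LLMs": ["GPT-style models"],
            },
        },
    },
}

def contains(tree, name):
    """True if `name` appears anywhere in the nested taxonomy."""
    if isinstance(tree, dict):
        return name in tree or any(contains(v, name) for v in tree.values())
    return name in tree  # leaf: a plain list

print(contains(AI, "LLMs"))                  # True: every LLM is an AI...
print(contains(AI["classical AI"], "LLMs"))  # False: ...but not all AI is LLMs
```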
I work primarily in “classical” AI and have been working with it on and off for just under 30 years now. I programmed my first genetic algorithms (GAs) and artificial neural networks (ANNs) in the 90s. I survived Prolog. I’ve fought prolonged battles getting entire corporate departments to use the terms “Machine Learning” and “Artificial Intelligence” correctly, understand what they mean, and think about how to incorporate them properly into their work.
That’s why I chose the word “LLM” in my response, not “AI”.
I will admit that I assumed that by “AI” Jimmy Carr was referring to LLMs, as that’s what most people mean these days. I read the TL;DW by @masterspace@lemmy.ca but didn’t watch the original content. If I’m wrong in that assumption and he’s referring to classical AI, not LLMs, I’ll edit my original post.
It’s not entirely clear what he’s referring to; he just uses the term AI broadly in the context of people worrying about job losses, then talks about the reduction in secret-police costs that enables, then discusses applying AI to physics.
AI is not synonymous with LLM. AlphaFold cracked protein structure prediction; it’s an AI, but it’s not an LLM.
100% this. People say they understand that AI is a buzzword, but they don’t realize just how large an umbrella that term actually is.
Enemy NPCs in video games going back to the ’80s fall under AI.
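For anyone curious what that ’80s-era “AI” typically looked like under the hood: often just a hand-written finite state machine, with no learning anywhere. A minimal sketch, where every name and threshold is invented for illustration:

```python
# A hand-rolled finite state machine in the spirit of 80s enemy "AI".
# No learning involved; all names and thresholds here are invented.

def npc_state(distance_to_player, health):
    """Pick the NPC's behaviour from simple hard-coded rules."""
    if health < 20:
        return "flee"
    if distance_to_player < 5:
        return "attack"
    if distance_to_player < 15:
        return "chase"
    return "patrol"

# Simulate a player closing in, then wounding the NPC.
for dist, hp in [(30, 100), (12, 100), (4, 100), (4, 10)]:
    print(f"dist={dist:>2} hp={hp:>3} -> {npc_state(dist, hp)}")
# dist=30 hp=100 -> patrol
# dist=12 hp=100 -> chase
# dist= 4 hp=100 -> attack
# dist= 4 hp= 10 -> flee
```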
The term AI is actually from the 1950s
Indeed, but I can’t use video games as an example in that time period
Early AI 😄
lmao, you got me there!
Exactly. Or, to be more precise to the point of the comment that started this thread:
Physics is to Chemistry what AI is to LLMs
Autocorrect and grammar suggestions are AI.
Steak sauce is A1.
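Joking aside, autocorrect is a nice example of how mundane AI can be: at its simplest it’s just a search for the dictionary word with the smallest edit distance. A minimal sketch, with a word list invented for the example (real autocorrect layers word-frequency and context models on top):

```python
# Bare-bones autocorrect: suggest the dictionary word with the smallest
# Levenshtein edit distance. The word list below is invented for the
# example; real systems add frequency and context models on top.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance, rolling one row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete from a
                                     dp[j - 1] + 1,      # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

WORDS = ["autocorrect", "grammar", "suggestion", "intelligence"]

def suggest(typo):
    """Return the closest known word to the typo."""
    return min(WORDS, key=lambda w: edit_distance(typo, w))

print(suggest("autocorrrect"))  # -> "autocorrect"
```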