My knowledge on this is several years old, but back then, there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. The studies gave both humans and AI the same images from past cases where the correct diagnosis was already known. Sometimes, even when humans reviewed an image after learning the answer, they couldn’t figure out how the AI had gotten it right. It’s hard to imagine that AI has gotten worse in the years since.
When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes is the method I want. If humans are better than AI, I want humans; if AI is better, I want AI. I don’t think this sentiment will be uncommon: I’m not going to sacrifice my health so that somebody else can keep their job. There are a lot of other things I would sacrifice, but not my health.
That’s because the medical one (particularly good at spotting cancerous cell clusters) was a pattern- and image-recognition AI, not a plagiarism machine spewing out fresh word salad.
LLMs are not AI
They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* Pathfinding algorithm technically counts as AI.
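For anyone who hasn’t run into it: A* really is just a small, fully human-written search procedure, which is part of why it feels odd to call it “AI.” Here’s a minimal sketch of A* on a 2D grid with a Manhattan-distance heuristic (the grid format and function name are just illustrative choices, not from any particular library):

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D grid of 0 (free) and 1 (wall).
    Returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible heuristic on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Heap entries: (f = g + h, g = cost so far, cell, path taken)
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cell[0] + dr, cell[1] + dc
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```

Every line of that is deterministic, hand-authored logic, yet it has sat in AI textbooks for decades, which is the point about how broad the field is.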
When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they’re making decisions that appear intelligent, they’re AI.
One example of an expert system “AI” is called “game AI.” If a bot in a game appears to be acting similar to a real human, that’s considered AI. Or at least it was when I went to college.
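To make that concrete, a game-bot “expert system” can be nothing more than hand-written if-then rules. This is a toy sketch (the state fields and action names are made up for illustration), but structurally it’s the same idea:

```python
# A toy rule-based "expert system" for a game bot. Every rule is
# 100% human-authored, yet the resulting behavior can look intelligent.
def bot_action(state):
    """state: dict with 'health' (0-100), 'enemy_visible' (bool),
    and 'ammo' (int). Returns the bot's next action as a string."""
    if state["health"] < 25:
        return "retreat"      # self-preservation beats everything else
    if state["enemy_visible"] and state["ammo"] > 0:
        return "attack"       # engage when armed
    if state["enemy_visible"]:
        return "take_cover"   # seen the enemy but out of ammo
    return "patrol"           # nothing happening: default behavior
```

Watching a bot driven by rules like these retreat when wounded or hide when out of ammo reads as “intelligent,” even though no learning is involved anywhere.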
AI is kind of like Scotsmen. It’s hard to find a true one, and every time you think you have, the goalposts get moved.
Now, AI is hard, both to make and to define. As for what is sometimes called AGI (artificial general intelligence), I don’t think we’ve come close at this point.
I see the no true Scotsman fallacy as something that doesn’t affect technical experts, for the most part. An anthropologist, for example, would probably go with the simplest definition of birthplace, or perhaps go so far as to use heritage. But they wouldn’t get stuck on the convoluted reasoning in the fallacy.
Similarly, for AI experts, AI is not hard to find. We’ve had AI of one sort or another since the 1950s, I think. You might have it in some of your home appliances.
When talking about human-level intelligence in an inanimate object, the history is much longer. Thousands of years. To me, it’s more a question for philosophers than for engineers. The same questions we’re asking about AI, philosophers have asked about humans. And just about every time people say modern AI is lacking in some trait compared to humans, you can find a history of philosophers asking whether humans really exhibit that trait in the first place.
I guess neuroscience is also looking into this question. But the point is, once they can explain exactly why human minds are special, we engineers won’t get stuck on the Scotsman fallacy, because we’ll be too busy copying that behavior into a computer. And then the non-experts will get to have fun inventing another reason that human intelligence is special.
Because that’s the real truth behind the Scotsman fallacy, isn’t it? The person has already decided on the answer, and will never admit defeat.