If you can’t know if it’s right or wrong, and have to double check it, why use it in the first place?
Me and my partner alternate doing the cooking. She doesn’t know if I’m going to make a mistake and serve her something she doesn’t like (it has happened). Does that mean she’s better off doing all the cooking herself?
“If it’s not perfect, it’s useless” is a fallacy. So the question is, how good does it have to be to be useful? That depends on the task, and especially on the cost (however you measure it - dollars or hours or whatever) of verifying whether the result is good compared to the cost of a person doing the task.
Does she put glue on your pizza?
Not yet…
When you cook well, you can eat the food.
When the bot says something, you always need to look up if it’s correct. That’s the ‘cook a new meal from scratch’ bit, not the ‘taste it’ bit.
You need to look things up every time, or do the taste test by asking if the bot’s answer ‘smells true’ (which is tempting, but a bad idea).
If you are using the bot just to perform things that you could easily look up, then yes, that is pointless.
“Food I don’t like” as an output isn’t really comparable to “information that is factually incorrect.”
It’s comparable because it’s a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily - but the negative outcome, and the ease of checking, varies vastly across the space of all possible information.
I am encouraging you to think about situations where the negative outcome is not that bad, and the ease of checking quite high. Does that make using AI more practical?
It’s subjective vs objective. They’re not really comparable at all.
The objective reality of an AI hallucination being wrong is not what’s important though; what is important is the effect it has on people, which will in part be subjective.
Nothing prevents you from comparing harms and ease of checking.
It is very important. We’re just going to have to agree to disagree.
Well you certainly aren’t giving me any reason to agree… :/