“Food I don’t like” as an output isn’t really comparable to “information that is factually incorrect.”
It’s comparable because it’s a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily; the negative outcome, and the ease of checking, varies vastly across the space of all possible information.
I am encouraging you to think about situations where the negative outcome is not that bad, and the ease of checking is quite high. Does that make using AI more practical?
It’s subjective vs. objective. They’re not really comparable at all.
The objective reality of an AI hallucination being wrong is not what’s important, though; what is important is the effect it has on people, which will be partly subjective.
Nothing prevents you from comparing harms and ease of checking.
It is very important. We’re just going to have to agree to disagree.
Well you certainly aren’t giving me any reason to agree… :/