…training them to be skeptical and not blindly trust what comes out of the machine…
This is what I’ve never understood about using AI in its current form. If you can’t know whether it’s right or wrong, and have to double-check it, why use it in the first place? Wouldn’t it be more efficient and easier to just use the couple of petaflops in your own head to solve the problem or write that email?
I think, then, that it is more of a novelty that has yet to wear off for some people and is consistently buoyed by the CEOs who push it.
My partner and I alternate doing the cooking. She doesn’t know if I’m going to make a mistake and serve her something she doesn’t like (it has happened). Does that mean she’s better off doing all the cooking herself?
“If it’s not perfect, it’s useless” is a fallacy. So the question is, how good does it have to be to be useful? That depends on the task, and especially on the cost (however you measure it - dollars or hours or whatever) of verifying whether the result is good, compared to the cost of a person doing the task.
Does she put glue on your pizza?
Not yet…
When you cook well, you can eat the food.
When the bot says something, you always need to look up whether it’s correct. That’s the ‘cook a new meal from scratch’ bit, not the ‘taste it’ bit.
You need to look things up every time, or do the taste test by asking whether the bot’s answer ‘smells true’ (which is tempting, but a bad idea).
If you are using the bot just to perform things that you could easily look up, then yes, that is pointless.
“Food I don’t like” as an output isn’t really comparable to “information that is factually incorrect.”
It’s comparable because it’s a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily - but the negative outcome, and the ease of checking, varies vastly across the space of all possible information.
I am encouraging you to think about situations where the negative outcome is not that bad, and the ease of checking quite high. Does that make using AI more practical?
It’s subjective vs. objective. They’re not really comparable at all.
The objective reality of an AI hallucination being wrong is not what’s important though; what is important is the effect it has on people, which will in part be subjective.
Nothing prevents you from comparing harms and ease of checking.
It is very important. We’re just going to have to agree to disagree.
Well you certainly aren’t giving me any reason to agree… :/
It’s easier to copyedit an email than to write it from scratch.
Edit: I meant copyedit, not copywrite.
Copywriting is writing from scratch, though specifically for marketing.
I wanted to say “copyedit”.
The petaflops sometimes… flop. The only use case I personally have for LLMs, and they are brilliant at it, is when a word just won’t come to mind: I can give them a precise description of it, but my brain refuses to produce the word, in English or in Spanish.