Frankly, I find the AI hate kind of tiresome itself. ChatGPT is just another source of information; it can be right or wrong, as can a webpage, a book, or a person. Nowadays all the people who think they’re smart just tell you to Google the answer yourself; in the early 00s people were just as dismissive of answers found on the internet. If you had the answer to a question and told people you’d read it on the internet, they’d smirk at you and tell you to read a book (which could also be wrong).
Why would you want something answering you that is not deterministic? How is that useful for information gathering? If it spouts lies a decent amount of the time, that’s completely useless information.
As other people have stated, it will fundamentally never be a “source of information.” Trust can be built in sources and their information can be verified, but since LLMs guess answers based on what they think sounds right, you’ll still need independent information to even know whether the answer is right. This makes it completely redundant. It doesn’t matter how powerful it becomes; it will never do things it was not designed to do.
The real tragedy is that machine learning is a powerful technology, but people don’t know its limits and misuse the tech as a result.
GPT (or any other LLM) is not a source; it’s a relay that detaches answers from their original sources and thus washes away any sort of credibility.
This exactly. If it just said “Here are sources with info about that, and a summary of what they say,” that would be helpful. Presenting the info as authoritative is the crux of the problem. People are too stupid to *not* trust it.
Even those summaries with sources should be used with caution. I’ve had plenty of search summaries where the AI just omitted a ‘not’ or other vital parts of the original answer (to be fair, that’s also the case for human-made summaries; just look at the amount of accidental misinformation on Wikipedia caused by inattentive reading of original sources).
Ah yes, all sources of information are equal; that’s why the bullshit I spew drunk in the bar at 3 AM is just as valid as any well-supported, verifiable claim.
Personally I find the fact that people trust unreliable software to be annoying and a huge societal problem.
I find AI hate tiresome, but relying on an AI to be an authoritative source is exactly why so many people are hating it. Please stop.
The only one that didn’t have conscious thought put into the answer. One that can’t be updated/revised/held to account for being wrong. At the same time, many expect it to be more right because it’s a computer, and computers are supposed to be infallible.