• twinnie@feddit.uk · 15 points · 1 month ago

    Frankly, I find the AI hate kind of tiresome itself. ChatGPT is just another source of information: it can be right or wrong, as can a webpage, a book, or a person. Nowadays all the people who think they’re smart just tell you to Google the answer yourself; in the early 00s people were the same about finding answers on the internet. If you had the answer to a question and said you’d read it on the internet, they’d smirk at you and tell you to read a book (which could also be wrong).

    • BremboTheFourth@piefed.ca · 40 points · 1 month ago

      Ah yes, all sources of information are equal; that’s why the bullshit I spew drunk in the bar at 3 AM is just as valid as any well-supported, verifiable claim.

    • TrickDacy@lemmy.world · 32 points · 1 month ago

      Personally, I find it annoying, and a huge societal problem, that people trust unreliable software.

    • bdonvr@thelemmy.club · 31 points · 1 month ago

      “another source of information”

      The only one that didn’t have conscious thought put into the answer, and the only one that can’t be updated, revised, or held to account for being wrong. At the same time, many people expect it to be more right because it’s a computer, and computers are supposed to be infallible.

    • germanatlas@lemmy.blahaj.zone · 25 points · 1 month ago

      GPT (or any other LLM) is not a source; it’s a relay that obscures its original sources and thus washes away any credibility.

      • _stranger_@lemmy.world · 9 points · 1 month ago

        This exactly. If it just said “Here are sources with info about that, and a summary of what they say,” that would be helpful. Presenting the info as authoritative is the crux of the problem. People are too stupid to *not* trust it.

          • germanatlas@lemmy.blahaj.zone · 2 points · 1 month ago

          Even those summaries with sources should be used with caution; I’ve had plenty of search summaries where the AI just omitted a ‘not’ or other vital parts of the original answer. (To be fair, that’s also the case for man-made summaries; just look at the amount of accidental misinformation on Wikipedia caused by inattentive reading of original sources.)

    • TotallynotJessica@lemmy.blahaj.zone (OP, mod) · 14 points · 1 month ago

      As other people have stated, it will fundamentally never be a “source of information.” Trust can be built in sources and their information can be verified, but since LLMs guess answers based on what they think sounds right, you still need independent information just to know whether they’re right. That makes them completely redundant as a source. It doesn’t matter how powerful the technology becomes; it will never do things it was not designed to do.

      The real tragedy is that machine learning is a powerful technology, but people don’t know its limits and misuse the tech as a result.

    • belated_frog_pants@beehaw.org · 4 points · 1 month ago

      Why would you want answers from something that isn’t deterministic? How is that useful for information gathering? If it spouts lies a decent amount of the time, the information it gives is completely useless.
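
      To make the non-determinism point concrete, here’s a toy sketch, not any real model or vendor API: a made-up next-token distribution (the probabilities and the sample_next_token helper are invented for illustration) gets sampled with a temperature, so the same prompt can come back with a different answer on different runs.

```python
# Toy illustration of sampled (non-deterministic) text generation.
# Nothing here is a real model: the "prompt", the next-token probabilities,
# and sample_next_token() are all made up for the example.
import random

# Hypothetical next-token distribution after the prompt
# "The capital of France is"
next_token_probs = {"Paris": 0.90, "Lyon": 0.06, "Marseille": 0.04}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution,
    making unlikely (wrong) tokens more probable."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    threshold = random.uniform(0.0, total)
    cumulative = 0.0
    for tok, weight in weights.items():
        cumulative += weight
        if threshold <= cumulative:
            return tok
    return tok  # floating-point safety net

# Same prompt, five runs: usually "Paris", but not always.
for _ in range(5):
    print(sample_next_token(next_token_probs, temperature=1.5))
```

      Run it a few times: at higher temperatures the wrong answers show up more often, which is exactly why a sampled answer isn’t something you can treat as a stable, verifiable source.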