• Joe@discuss.tchncs.de · 5 hours ago

    Sure… copy & paste is copy & paste.

    However, LLMs can help turn a scattered braindump of thoughts and opinions into a coherent argument or position, fact-check claims, and highlight faulty thinking.
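
    As one concrete way to do that, here is a minimal sketch assuming the `openai` Python client; the model name, prompt wording, and notes are placeholders, not a prescribed setup:

    ```python
    # Minimal sketch: ask an LLM to turn rough notes into a coherent bug report.
    # Assumes the openai Python client; model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    braindump = """
    - deploy script flaky on tuesdays?
    - maybe related to the cron cleanup job
    - also logs fill the disk... unrelated? not sure
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's notes as a coherent bug report "
                        "and flag any claim that needs verification."},
            {"role": "user", "content": braindump},
        ],
    )
    print(response.choices[0].message.content)
    ```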

    I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

    I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.

    Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.

    • dustycups@aussie.zone · 3 hours ago

      …fact check claims

      Risky use case. Besides, why bother when you have to fact-check the fact checker?

      • Joe@discuss.tchncs.de · 2 hours ago (edited)

        It is about respecting everyone’s time…

        For example, if an executive claims in an email, “We don’t have any solution to X in the company”, as justification for investing in a vendor, it can cost other people hours as they dig into it. If an AI had fact-checked the claim first by searching code repos, wikis, and tickets, and found it wasn’t true, maybe that email would never have been sent, or it would have acknowledged the existing product and led to a crisper discussion.
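
        As a sketch of what such a pre-send check could look like: the paths, the claim keyword, and the grep-style search below are all hypothetical, and a real setup would query the actual code-search, wiki, and ticket APIs instead.

        ```python
        # Minimal sketch: look for internal evidence before claiming
        # "we have no solution to X". Paths and keyword are hypothetical;
        # a real pipeline would hit code-search, wiki, and ticket APIs
        # instead of grepping local mirrors.
        from pathlib import Path

        SOURCES = {
            "code":    Path("/srv/mirrors/repos"),    # hypothetical repo mirror
            "wiki":    Path("/srv/mirrors/wiki"),     # hypothetical wiki export
            "tickets": Path("/srv/mirrors/tickets"),  # hypothetical ticket dump
        }

        def find_evidence(keyword: str, limit: int = 5) -> list[str]:
            """Return up to `limit` files mentioning the keyword, tagged by source."""
            hits = []
            for label, root in SOURCES.items():
                if not root.is_dir():
                    continue
                for path in root.rglob("*"):
                    if not path.is_file():
                        continue
                    try:
                        text = path.read_text(errors="ignore")
                    except OSError:
                        continue
                    if keyword.lower() in text.lower():
                        hits.append(f"[{label}] {path}")
                        if len(hits) >= limit:
                            return hits
            return hits

        evidence = find_evidence("invoice deduplication")  # hypothetical claim topic
        if evidence:
            print("Existing work found; the claim looks wrong:", *evidence, sep="\n  ")
        else:
            print("No internal hits; the claim may hold, but verify with the teams.")
        ```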

        AI responses often only need a quick sniff test from a human (e.g. clicking the provided link to confirm)… whereas BS can derail your day.

        We should share our knowledge and intelligence with AIs and people alike, not our ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.

      • frongt@lemmy.zip · 2 hours ago

        Hallucination is a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the size of the context attached to each piece of information (no idea what it’s called).
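
        To make “a feature of text prediction” concrete, here is a toy bigram model, a drastic simplification of an LLM: each word is predicted only from the one before it, so it happily produces fluent sequences that never occurred and may be false. Widening that context is the expensive fix hinted at above.

        ```python
        # Toy bigram "language model": predicts each word from the previous one.
        # Fluent output is the training objective; truth never enters into it.
        import random
        from collections import defaultdict

        corpus = ("the api returns json . the api returns xml . "
                  "the service returns json .").split()

        model = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            model[prev].append(nxt)

        word, out = "the", ["the"]
        for _ in range(6):
            word = random.choice(model[word])  # sample a plausible next word
            out.append(word)

        # Can print "the service returns xml .": fluent, never in the corpus,
        # and possibly false; a "hallucination" produced by the objective itself.
        print(" ".join(out))
        ```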

        • Truscape@lemmy.blahaj.zone · 2 hours ago (edited)

          I believe it’s just complexity and token/compute usage.

          You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).

          It’s also 100% unfixable as a premise of the technology. I can enjoy an upscaling algorithm that makes my retro games look more detailed at the cost of an odd artifact, but I sure as shit am not taking that risk for information gathering and general study.

        • magnetosphere@fedia.io · 2 hours ago

          I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.

      • ulterno@programming.dev · 1 hour ago

        That doesn’t seem like a solvable problem.
        People tend to make stuff up, too. The difference is that with people, the bluff is revealed through non-verbal communication.

        • magnetosphere@fedia.io · 1 hour ago

          Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.

          If hallucinations aren’t a solvable problem, then either reliable AI is impossible, or we’re going about it the wrong way.