• AmbitiousProcess (they/them)@piefed.social
    7 hours ago

    I’m not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

    The reason so many people turn to LLMs for legal/medical advice is that those are both incredibly expensive, complex, hard-to-parse fields.

    If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it’s probably the flu and tells me to mask up for a bit, that’s probably gonna be better than that person being told “I’m sorry, I can’t answer that”

    At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

    I feel like I’d much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.

    Like for example, if an LLM cites multiple medical journals, government health websites, etc., and accurately relays the information those sources had published, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else’s accidental misinformation?

    But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?

    I’m not really sure myself to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information, for a lot of people, is just gonna be “you can’t know”, and they’re not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.

    • felixwhynot@lemmy.world
      7 hours ago

      It’s problematic imho bc the “advice” is often incomplete, without context, or wrong. So you end up having to verify it yourself anyway. And if you don’t, you could be acting on harmful advice.

      • frongt@lemmy.zip
        4 hours ago

        Which to be fair is not any different from a lawyer. They’re not perfect either.

        The difference is that a lawyer can be held responsible for malpractice. When a chatbot gives harmful advice, who is responsible?

        (Obviously, whoever is running it, but so far that hasn’t been established in court.)

    • TropicalDingdong@lemmy.world
      7 hours ago

      ITT: People with absolutely no fucking clue what the consequences of their emotional “AI bad” response will actually be.