• JoshuaFalken@lemmy.world
    7 points · 7 hours ago

    I could see the argument for things that aren’t particularly important, but to continue with the legal example, it seems akin to the difference between asking a practicing lawyer a question and asking someone who watched Boston Legal when it aired and can quote James Spader.

    Unfortunately, given the potential for hallucinated responses, anything beyond quite simple queries shouldn’t be relied on with more weight than a crutch made of toothpicks.

    • TropicalDingdong@lemmy.world
      4 points · 7 hours ago

      I don’t think you are wrong, but again, that’s not the case.

      You’re making an argument about speech here.

      Let’s say you make a fan website based entirely on a fine-tuned LLM which acts and responds as James Spader from Boston Legal. Are you liable if a user of that website construes that speech as legal advice?

      If you are willing to give up access to speech so easily, I have almost no hope for Americans in the near future.

      What laws like this do is create an incredibly high-pass filter that benefits those in positions of established power. It’s literally suicidal in regards to freedom of speech on the internet.

      The right answer is that if you are dumb enough to have gotten your legal advice from an AI hallucination of James Spader, you get to absorb those consequences. The wrong answer is to tell people they aren’t allowed to build fan websites of James Spader giving questionable legal advice.

      • deliriousdreams@fedia.io
        2 points · 2 hours ago

        In your example, say you go to a lawyer and ask legal questions. If the lawyer is not providing legal advice (i.e., taking on the role of being your lawyer and representing you in that matter), they are required by law to state that at the beginning so that they will not be held liable, because they are a legal professional.

        Wikipedia, Google, chatgpt etc are not legal authorities or legal professionals.

        There is also no human entity to hold legally responsible if the LLM hallucinates or cites a source that is not factual (satire, for instance).

        We also know that the vast majority of people who use chatbots do not check the sources the answers come from.

        So. When Wikipedia presents information, it is not giving legal advice. That is borne out in case law.

        The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust reddit.

        No lawyers are going to reddit to get help writing legal briefs. We have seen lawyers using LLMs for that, though.

        • TropicalDingdong@lemmy.world
          1 point · 1 hour ago

          Wikipedia, Google, chatgpt etc are not legal authorities or legal professionals.

          Yes. And neither are LLMs or their derivatives.

          The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust reddit.

          And yet people do, and we accept that as a necessary consequence of maintaining free speech as a principle.

          The exact arguments being accepted in this thread are the same which led directly to crackdowns in Hungary, China, and Russia.

          If you are okay with limiting and regulating LLMs as a form of speech, I promise it’s your speech which will end up limited, and a very small number of companies will control all speech on the internet. You should stop.

          • deliriousdreams@fedia.io
            1 point · 42 minutes ago

            Whose speech is being limited by limiting LLMs? Because as a legal entity, their speech cannot be infringed, because the LLM doesn’t have basic rights in the way that a human does.

            So what you’re saying is that you don’t want these companies to be held to any legal standard for the information they output (which is different from reddit because the companies can’t be held responsible in the US under section 230 for what their users write).

            The chatbot is the output of the company’s data set, and somehow you’re saying the company can’t be held responsible for what that output is, even when it’s dangerous, because that would be curtailing free speech?

            That’s such an interesting take.

            • TropicalDingdong@lemmy.world
              1 point · edited · 24 minutes ago

              I’m gaming out the realistic consequences of what a law like this will mean. Whether or not you approve of these companies has nothing whatsoever to do with understanding the consequences if a law like this passes. You don’t get to pick and choose whether it’s the speech of an LLM, a company, or an individual that gets limited. There is no difference from a legal perspective.

              And this law, and this approach of limiting speech to “protect people” from the stupid consequences of their own actions, aren’t new. We already know the consequences: large corporate entities will just get around them or pay an inconsequential fine, and individuals will have their rights curtailed as a result.

              The entire thread here is falling for an incredibly obvious astroturfing campaign because they associate LLMs with big bad corporations and the real consequences these bad companies have wreaked. But limiting free speech on the internet won’t stop them, what it will stop is our ability to communicate and resist them.

      • JoshuaFalken@lemmy.world
        4 points · 6 hours ago

        Presumably such a site would be visually obvious as parody. Having it give jokey answers as a caricature would be one thing. If you dressed it up as a professional legal advice service offering opinions on criminal law from Alan Shore, that could be problematic.

        At a certain point of information sharing, we should want a high bar for the ones providing the answers. When asking nuanced questions, we should want the answer to come from knowledge, not memory. I made an example in this other comment.

        I’m not sure I agree with your ‘right answer’ bit. Personally, I’d prefer dumb people to be protected in a similar way that I want the elderly protected from losing their savings from an email scam.

        • TropicalDingdong@lemmy.world
          2 points · 6 hours ago

          I promise you, the result of this will be unlimited free speech for corporations and their LLMs, with limited and regulated free speech for you. Save or favorite the comment.

          It’s the same “protect the children” anti-free-speech advocacy in a different wrapper, but more appealing to this audience because “llm bad”.

          They’re using your emotional response to not liking LLMs as a tool to trick you into giving away your rights.