• zogrewaste@sh.itjust.works · 37 points · 12 hours ago

    If the LLM they used was trained on the original code, the result was not legally rewritten. To change the licensing without buy-in from all original authors, the new code must be fully original, written from the spec. Ignoring the legal definitions for convenience opens the door for corporations to steal open-source and copyleft materials and strip away the licensing requirements.

    • hobata@lemmy.ml · 2 points · 12 hours ago

      That’s a wild claim you’re making. So far, it looks like the code is completely new, and for this case, it doesn’t really matter where it comes from. New code - new license.

      • Treczoks@lemmy.world · 16 points · 10 hours ago

        If the LLM training data is based on / has used GPL code, this might set an interesting legal precedent.

      • mina86@lemmy.wtf · 18 points · 11 hours ago

        If you write new code looking at the old code in another editor window, that’s likely derivative work. If you’ve never seen the original code and are looking only at the API, that’s likely not derivative work. Determining whether the code is ‘new’ is insufficient.

      • wholookshere@piefed.blahaj.zone · 8 points · 11 hours ago

        Okay, but you have to be able to prove the LLM didn’t learn from the original source material. Because if it did, it’s derivative work, making it subject to the LGPL.

        • redrum@lemmy.ml · 1 point · 5 hours ago

          The LLM is not the copyright owner; the developer of the LGPL package is. IMHO, it’s an obvious violation of the original developer’s rights.

        • hobata@lemmy.ml · 2 points · 10 hours ago

          Well, I do not have to; the burden of proof lies on the person making the claim.

          • wholookshere@piefed.blahaj.zone · 7 points · 10 hours ago

            That’s valid in a debate, but that’s not quite how courts work.

            I’m not a lawyer, just someone petty enough to read laws.

            The discovery requests in the lawsuit will require turning over all the training data. From there, it will be up to the AI makers to prove that the original code wasn’t fed into the training data. Which, if it was open source, it almost certainly was.

            That aside:

            You’re making an equal claim that it wasn’t, with an equal amount of proof. So what you’re saying bears as much weight as what the other person said.