• Grimy@lemmy.world

    It uses the content in a different way for a different purpose. The part I highlighted above applies to it. Do you expect copyright laws to mention every single type of transformative work acceptable? You are being purposely ignorant.

    • woelkchen@lemmy.world

      Do you expect copyright laws to mention every single type of transformative work acceptable? You are being purposely ignorant.

      I asked nicely for a quote showing that machine generation is also covered, which you couldn’t provide, and now you feel the need to lash out.

      And yes, I absolutely expect machine generation to be explicitly mentioned, for the simple fact that right now machine-generated anything is not copyrightable at all. A computer isn’t smart, a computer isn’t creative. Its output doesn’t pass the threshold of originality, so there is no creative transformation happening, as there is with reinterpretations of songs.

      What is copyrightable are the works that served as the training set, so there absolutely has to be an explicit mention somewhere that the original copyright does not simply pass into the machine-generated work, just like how a human writes source code and the compiled executable is still the human author’s work.

      Edit: Downvotes instead of arguments. Pathetic.

      • Grimy@lemmy.world

        In the Office’s view, training a generative AI foundation model on a large and diverse dataset will often be transformative. The process converts a massive collection of training examples into a statistical model that can generate a wide range of outputs across a diverse array of new situations. It is hard to compare individual works in the training data—for example, copies of The Big Sleep in various languages—with a resulting language model capable of translating emails, correcting grammar, or answering natural language questions about 20th-century literature, without perceiving a transformation.

        https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf

        You can read the whole doc. The part above is cherry-picked. I haven’t read through the whole thing, but at a glance the doc basically explains how it depends. If a model is trained specifically to output one piece of content, that wouldn’t be acceptable.

        The waters are muddy, but holy fuck does taking the copyright juggernauts’ side sound bloody stupid.