TL;DR: The advent of AI-based LLM coding applications like Anthropic’s Claude and OpenAI’s ChatGPT has prompted maintainers to experiment with integrating LLM contributions into open source codebases.

This is a fast path to open source irrelevancy, since the US Copyright Office has deemed LLM outputs to be uncopyrightable. As more uncopyrightable LLM outputs are integrated into nominally open source codebases, value leaks out of the project, because open source licences are not operative on public domain code.

That means the public domain, AI-generated code can be reused without attribution and, in the case of copyleft licences, can even be used in closed-source projects.

  • slacktoid@lemmy.ml · edited · 20 hours ago

    The way I see it (and I’m not saying this isn’t a valid concern) is that it still doesn’t help with code maintenance. Just because you can create code doesn’t mean you can maintain it. Many companies moved to open source (not free software) because of the financial incentives: security and long-term maintainability of the codebase. Think of how much better, say, TensorFlow and PyTorch got because they were open source. The engineers at Google and Meta could have built them on their own, so what were their reasons for open-sourcing them? I doubt those reasons have changed with AI. Nothing beats free QA testing.