TL;DR: The advent of LLM-based coding assistants such as Anthropic’s Claude and OpenAI’s ChatGPT has prompted maintainers to experiment with integrating LLM contributions into open source codebases.
This is a fast path to open source irrelevancy, since the US Copyright Office has deemed LLM outputs to be uncopyrightable. As more uncopyrightable LLM outputs are integrated into nominally open source codebases, value leaks out of the project, since open source licences are not operative on public domain code.
That means the public domain, AI-generated code can be reused without attribution and, in the case of copyleft licences, can even be used in closed source projects.



Ehhhhh, that depends on how you take it. Personally, no, I’m not very worried about the legal aspect. But,
It’s still LLMs. FOSS communities have been better than average, but that’s a low bar, considering coders have been among the heaviest LLM users of all. And LLM usage is reckless, not to mention presently harmful in numerous ways. (And yes, this means the latest models too. “Looks good” doesn’t mean it is good.) I’d just as soon FOSS not use the tech at all.