There is no settled legal status for the output of AI systems, and it’s certainly something that needs clarification going forward. The law may treat asking an LLM to regurgitate its training data differently from having it follow instructions in a local context. Human engineers are allowed to use “retained knowledge” from their experience even if they can’t bring their notebooks from previous careers. LLMs are just better at it.
As of March 2, it has been settled: AI-generated works must have substantial human creative input in order to be copyrightable. Prompting the AI does not meet that requirement.
Prompting the AI alone does not meet that requirement. I.e., you can’t say “draw me a picture of a cat” and then copyright the picture of the cat, claiming you made it.
You can say “help me draw this left ear over here, now make the right ear up here, a little taller, darken the edges a bit”, all with prompts, and that may constitute sufficient creative input.
Your link doesn’t support what you’re saying in the slightest. Have whatever opinion you want, but don’t shovel up transparent bullshit to push your narrative.
TFA is about a copyright claim on a work made by a purely autonomous device, and SCOTUS declining to hear a case doesn’t “settle” jack-shit.
Quoting further:
Thaler submitted an application to the US Copyright Office to register copyright in “A Recent Entrance to Paradise,” explicitly identifying the AI system as the author and stating the work was created without human intervention.
For now, businesses and creators using AI should continue to rely on the longstanding human authorship requirement. Under current law, works made solely by autonomous AI are not eligible for copyright protection in the United States. Ongoing cases also consider the amount of human input, including prompting or post-generation editing, required to register copyright in an AI-generated work.[12]
Companies should ensure a human contributes creatively and is named as the author in any copyright applications for AI-assisted works. To maximize protection, organizations should review their creative workflows and document human involvement in AI-assisted projects, particularly for commercial content. Organizations should continue to document the timing and scope of the use of AI in copyrightable works, for example by retaining prompts provided by the author. Internal policies should clarify attribution, ownership, the nature of creative input, and documentation requirements to avoid denied copyright applications.
Iteratively working on a codebase by guiding an LLM’s design choices and feeding it bug reports is fundamentally different from this case you’re citing.
If all you do is prompt the AI, “hey, fix bugs in this repo,” then you had no creative input into what it produces. So that kind of code would not be copyrightable, 100%. You can fight it in court, but the Supreme Court refusing to hear it means the lower court’s decision is settled law, and your chances of winning are essentially zero.
Whether code where you hold its hand and basically pair program with it is copyrightable hasn’t been settled. Considering the dev said he does it both ways, the point is rather moot, since he certainly doesn’t own the copyright to at least some of that AI-generated code.
OpenClaw is an autonomous system just like the one in that article, and the dev said that’s what he’s using at least some of the time. It generates and commits code without human intervention.
“AI” has been known to reproduce code from other projects, and hence code under other licenses. Its output can’t become public domain unless all of that code was also public domain.
I’d imagine there have been legal decisions more nonsensical than “AI output = public domain” that have had the full force of law for decades.
I recently dug around for a while, and if the copyright of works in the training data affects the copyright of outputs, no popular model can output anything that would even be close to acceptable for a contribution to an open-source project. Maybe if you trained a model exclusively on “The Stack” (NOT “The Pile”) and then included all the required attributions – but no ready-made model does that. All of the “open source” model frameworks that I could find included some amount of proprietary “pre-training” data that would also be an issue.
If AI output is NOT affected by the copyright of training data… there might not BE a (legal) person that can hold any copyrights over it, which is pretty close to public domain.
Here’s my issue with this specifically. It makes Lutris very vulnerable to being considered entirely public domain:
https://github.com/lutris/lutris/issues/6538
https://www.morganlewis.com/pubs/2026/03/us-supreme-court-declines-to-consider-whether-ai-alone-can-create-copyrighted-works
In other words, if the AI wrote the code, and you didn’t change it since then, it’s not yours at all. It’s public domain, no question.
That’s not how the dev said he’s generating code. He said sometimes he does it without any intervention at all.
Also, that’s potentially copyrightable. That hasn’t been settled.
Glad it applies worldwide /s
Slop can’t be copyrighted, great. We don’t want slop.