• 2 Posts
  • 418 Comments
Joined 3 years ago
Cake day: June 30th, 2023


  • If that is the case, is chardet 7.0.0 a derivative work of chardet, or is it a public domain LLM work? The whole LLM project is fraught with questions like these

    I think the reimplementation stuff is a separate question, because the argument for it working looks a lot stronger, and because it doesn’t depend on the source material containing LLM output. Also, if that method holds up as legally valid, it’s going to be easier to just do that than to justify copying code directly (which would probably mean copying only the explicitly generated parts and figuring out how to replace the rest), so it won’t matter whether some portion of the code was generated. I don’t see much reason to think that a purist approach to accepting LLM code will offer any meaningful protection.

    I’m mostly just playing along with your thought experiment. As I said, we know that projects are already accepting LLM code into projects that are nominally copyleft.

    So what, though? If the projects aren’t entirely generated, you can’t make a full fork, and how useful would a partial fork be? And if it isn’t disclosed which parts are AI, you can’t even do that without risking breaking the law.


  • but if they instead say that they copied the work into their LLM and produced a copy without protections (as chardet has done), the courts might be less willing to afford the project copyright protections if the project itself was making use of the same copyright stripping technology to strip others’ work to claim protections over copied work.

    IANAL, but does it even work like that? Is there any specific reason to think it does? I don’t believe you really get credit for purity and fairness vibes in the legal system. The same goes for the idea that code where it’s ambiguous whether it’s AI output could be considered public domain: that seems implausible, and is there actually any reason to think the law works that way? If it did, any copyrighted work not accompanied by proof of human authorship would be at risk, which would be uncharacteristic for a system focused on giving big copyright holders what they want without trouble.

    the only code that may ultimately be protected is closed source code - you can’t copy it if you don’t have the source.

    There is no way. Leaks happen, big tech companies have massive influence, and a situation where their code falls into the public domain as soon as the public gets their hands on it just isn’t realistic. I suspect many of these concerns come from not wanting LLM code in open source projects for other reasons, rather than from a strong legal case that it represents a real and serious threat to copyleft licensing.


  • AI code damages copyleft projects no matter what - we know that some projects are already accepting AI generated code, and they don’t ask you to hide it - it is all in the open.

    I don’t see how that follows from, or contradicts, what I’m saying though. They could hide it easily. Even if they don’t hide it, how useful would it really be to use only the portions of a codebase that have been labelled as AI generated? Can one even rely on those labels? Exploiting the non-copyrightability of AI output to copy code in otherwise unauthorized ways does not seem like a straightforward or legally safe thing to do. That’s especially the case because high profile proprietary software projects also make heavy use of AI, and it doesn’t seem likely the courts will support a precedent that strips those projects of copyright and allows anyone to use them for anything. So I’m not at all convinced that AI code damages copyleft projects; it seems unlikely to be a problem in practice.


  • The only portions of the work that can be copyrighted are the actual creative work the person has put into the work.

    Ok, but it’s not like everyone is documenting exactly which parts are generated, curated, or human written.

    Maintainers cannot prevent the LLM code from being incorporated into closed source projects without reciprocity

    Say someone incorporates GPL code without attribution and gets sued for it. They try to argue in court that the source material they used is not copyrighted, because of AI. Won’t they have to prove that the parts they used were actually AI output for this defense to work? It isn’t as though people can go around ignoring copyright on anything that looks like it was probably generated with AI; that isn’t enough to be safe from litigation, because you usually can’t know the exact breakdown. It seems like preventing this loophole would be as simple as keeping things ambiguous and not allowing submissions that positively affirm being entirely AI generated.



  • Makes sense; overall I agree. I’m mostly just unsure about the idea of a “snap”, “break”, or gaining self-awareness, as opposed to something more passive. It’s been a while since I saw the movie, though, and I didn’t read the book, so I can’t make much of an argument about it.


  • Yeah, I have no idea where, if anywhere, people share pirated mods; I’d definitely like to know, and I haven’t seen them on torrent sites. Beat Saber has something like this too, with a different system (where somehow third-party mod managers have ended up implementing this sort of check), and there’s Steam Workshop as you mention. Anyway, I do own Factorio (though not the expansion); it’s mostly just something that seems like a relevant factor.



  • Even where they aren’t, I bet this is something that could end up happening when they’re used as open-ended agents that might try making their own accounts. The article also mentions this:

    Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.
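    For contrast with the hardcoded “vibe-passwords” the article describes, here is a minimal sketch (the function name and length are my own choices, not from the article) of what a reviewer might want to see instead: a credential drawn from a cryptographically secure source at runtime, never a literal embedded in generated code.

```python
# Sketch: generate a password with Python's standard `secrets` module
# (a CSPRNG) instead of accepting a literal an LLM happened to emit.
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


pw = generate_password()
print(len(pw))  # 20
```

    The point is less the specific alphabet than the source of randomness: `secrets` is designed for security-sensitive values, whereas a string baked into source code is effectively public the moment the code is shared.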



  • Feel free to correct me if I’m wrong, but I thought it was inherent to the terminal that you can’t position the cursor or select text with the mouse, and that there are no right-click menus.

    If you don’t want to use a mouse in your code editor, that’s a valid preference, but these are very different styles of programs and exist in separate categories. Personally, I was using Atom before VSCodium, and I really like most of the latter’s design choices; it’s basically everything I always wanted an IDE to be. I don’t want to stop using the mouse.