Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 4 Posts
  • 158 Comments
Joined 3 years ago
Cake day: June 23rd, 2023

  • Note that there’s more than one model that can do pixel art, and there are pixel-art LoRAs that do a decent job. There’s loads of flexibility when generating this kind of thing.

    Also, you can just tell it to generate a thousand images over like 10 minutes, pick the best one, and use that as a base to improve upon (see the sketch at the end of this comment). AI is just a single tool in the workflow.

    I also want to point out that not everyone can just pay someone. Don’t be paternalistic: if people want to use AI in their workflow for any reason, that’s their concern. To angrily throw your hands in the air and say, “I’m not touching it because AI!” is like giving free money to the big publishers.

    You’re setting an unnecessarily high bar: “you must be this rich to ride.”
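
    For anyone curious, here’s a minimal sketch of that batch-and-pick workflow using the diffusers library. The base model name and LoRA path below are placeholders, not recommendations; swap in whatever pixel-art checkpoint and LoRA you actually use.

    ```python
    # Hedged sketch: generate lots of candidates cheaply, then pick by hand.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder base model
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights("path/to/pixel-art-lora")  # hypothetical LoRA file

    prompt = "pixel art spaceship sprite, 32x32, transparent background"
    for i in range(100):  # batches are cheap compared to drawing by hand
        image = pipe(prompt, num_inference_steps=20).images[0]
        image.save(f"candidate_{i:03}.png")
    # Then eyeball the folder, keep the best few, and touch them up manually.
    ```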


  • This is my take as well, but not just for gaming… AI is changing the landscape for all sorts of things. For example, if you wanted serious, professional checks of your novel’s grammar, consistency, and the like, you used to have to pay thousands of dollars for a professional editor to go over it.

    Now you can just paste a single chapter at a time into a FREE AI tool and get all that and more (a rough sketch of the idea is at the end of this comment).

    Yet here we are: still seeing grammatical mistakes, copy & paste oversights, and the like in brand-new books. It costs nothing! Just use the AI FFS.

    Checking a book with an AI chat bot uses up as much power/water as like 1/100th of streaming a YouTube Short. It’s not a big deal.

    The Nebula Awards recently banned books that used AI for grammar checking. My take: “OK, so only books from big publishers are allowed, then?”
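
    For what it’s worth, that edit pass amounts to very little code if you’d rather not copy/paste chapters by hand. Here’s a rough sketch against a hypothetical local OpenAI-compatible server (the URL, model name, and file name are all assumptions); any free hosted chat tool does the same job via paste.

    ```python
    # Hedged sketch: send one chapter at a time to a local model for an edit pass.
    from openai import OpenAI

    # Assumed: a local OpenAI-compatible server (e.g. Ollama) on this port.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    with open("chapter_01.txt") as f:  # hypothetical chapter file
        chapter = f.read()

    response = client.chat.completions.create(
        model="llama3",  # placeholder: whatever model your server exposes
        messages=[
            {"role": "system", "content": (
                "You are a copy editor. List grammar errors, consistency "
                "problems, and copy/paste oversights. Do not rewrite the prose."
            )},
            {"role": "user", "content": chapter},
        ],
    )
    print(response.choices[0].message.content)
    ```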


  • In Kadrey v. Meta, a group of authors sued Meta for copyright infringement, but the case was thrown out by the judge because they couldn’t actually produce any evidence of infringement beyond, “Look! This passage is similar.” They asked for more time so they could keep trying thousands (millions?) of different prompts until they finally got one that matched closely enough that they might have some real evidence.

    In Getty Images v. Stability AI (UK), the court threw out the case for the same reason: it was determined that even though it was possible to generate an image similar to something owned by Getty, that didn’t meet the legal definition of infringement.

    Basically, the courts ruled in both cases, “AI models are not just lossy/lousy compression.”

    IMHO: What we really need a ruling on is “who is responsible?” When an AI model does output something that violates someone’s copyright, is it the owner/creator of the model that’s at fault, or the person that instructed it to do so? Even then, does generating something for an individual even count as “distribution” under the law? I don’t think it does, because to me that’s just like using a copier to copy a book. Anyone can do that (legally) for any book they own, but if they start selling/distributing that copy, then they’re violating copyright.

    Even then, there are differences between distributing an AI model that people can run on their own PCs (like Stable Diffusion) vs. using an AI service to do the same thing. The fact that a model can be used for infringement should be meaningless, because anything (e.g. a computer, Photoshop, etc.) can be used for infringement. The actual act of infringement needs to be something someone does, like distributing the work.

    You know what? Copyright law is way too fucking complicated, LOL!


  • but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left.

    No, you cannot reasonably assume that. It absolutely did not store the visual elements. What it did was store some floating-point weights associated with the keywords the source image was pre-classified with. During training, it nudges those values up or down by a small amount each time it encounters further images tagged with those same keywords (see the toy sketch at the end of this comment).

    What the examples demonstrate is a lack of diversity in the training set for those very specific keywords. There’s a reason why they chose Stable Diffusion 1.4 and not Stable Diffusion 2.0 (or later): the model was drastically improved after that. These sorts of problems (not-diverse-enough training data) are considered flaws by the very AI researchers creating the models. It’s exactly the type of thing they don’t want to happen!

    The article seems to imply that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck. This is false. It’s flaws like this that leave your model open to attack (and let competitors figure out your weights; not that it matters with Stable Diffusion, since that version is open source), not just copyright lawsuits!

    Here’s the part I don’t get: clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at. Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!”, why TF should anyone care?

    They shouldn’t! The only reason why articles like this get any attention at all is because it’s rage bait for AI haters. People who severely hate generative AI will grasp at anything to justify their position. Why? I don’t get it. If you don’t like it, just say you don’t like it! Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?

    Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.

    Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck. Now that we’ve got the pre-alpha version of that very thing, a lot of extremely vocal haters are freaking TF out.

    Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!

    Generative AI uses up less power/water than streaming YouTube or Netflix (yes, it’s true). So if you’re about to say it’s bad for the environment, I expect you’re just as vocal about streaming video, yeah?
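
    To make the “it stores weights, not images” point concrete, here’s a toy sketch. It’s nothing like real diffusion training; it only shows why duplicated training data (rather than the training mechanism itself) is what produces memorization.

    ```python
    # Toy model: one shared set of floating-point weights gets nudged a little
    # per example; the examples themselves are discarded after each step.
    import numpy as np

    rng = np.random.default_rng(0)

    def train(examples, lr=0.01, steps=10_000):
        weights = np.zeros(64)  # tiny stand-in for model weights
        for _ in range(steps):
            example = examples[rng.integers(len(examples))]
            weights += lr * (example - weights)  # small nudge toward this example
        return weights

    diverse = [rng.normal(size=64) for _ in range(1000)]  # varied training images
    duplicates = [diverse[0]] * 1000                      # one image tagged 1000x

    print(np.linalg.norm(train(diverse) - diverse[0]))     # large: no image stored
    print(np.linalg.norm(train(duplicates) - diverse[0]))  # ~0: duplicate memorized
    ```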


  • The real problem here is that Xitter isn’t supposed to be a porn site (even though it’s hosted loads of porn since before Musk bought it). They basically deeply integrated a porn generator into their very publicly-accessible “short text posts” website. Anyone can ask it to generate porn inside of any post and it’ll happily do so.

    It’s like showing up at Walmart and seeing everyone naked (and many fucking), all over the store. That’s not why you’re there (though: Why TF are you still using that shithole of a site‽).

    The solution is simple: Everyone everywhere needs to classify Xitter as a porn site. It’ll get blocked by businesses and schools and the world will be a better place.


  • Working on (some) AI stuff professionally, I can say that open source models are the only ones that allow you to change the system prompt. Basically, that means that only open source models are acceptable for a whole lot of business logic.

    Another thing to consider: there are models that are designed for processing. It’s hard to explain, but stuff like Qwen 3 “embedding” is made for in/out usage in automation situations:

    https://huggingface.co/Qwen/Qwen3-Embedding-8B

    You can’t do that effectively with the big AI models (as much as Anthropic would argue otherwise… it’s too expensive and risky to send all your data to a cloud provider in most automation situations). A rough sketch of the embedding pattern is below.
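
    To make the in/out idea concrete, here’s a hedged sketch. The ticket/category data is made up, and it assumes the sentence-transformers loader handles this model (its card documents that usage; an 8B embedding model also needs a serious GPU).

    ```python
    # Hedged sketch: classify text by embedding similarity, no chat anywhere.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")

    tickets = [
        "Refund still not processed after 14 days",
        "App crashes whenever I open the settings page",
    ]
    categories = ["billing", "bug report", "feature request"]

    ticket_vecs = model.encode(tickets)
    category_vecs = model.encode(categories)

    scores = model.similarity(ticket_vecs, category_vecs)  # tickets x categories
    for ticket, row in zip(tickets, scores):
        print(ticket, "->", categories[int(row.argmax())])
    ```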