

Public mod log is a big accountability improvement over the absence of information Reddit has


I’m not going to dredge up the Reddit threads providing evidence of it, but AFAIK there really are popular Discord groups with the express purpose of brigading AI users, and I think the people here overtly defending the practice probably know it’s a real thing.


I didn’t learn to program using AI, so I don’t know all the details of how it would go for an amateur in the process of learning. But I have incorporated it into my work, so I know it can be very useful and save a lot of time, and that isn’t just about generating code. If you want to plan out how to debug something, you can get solid guidance. If you want clarification on what an unclear part of a tutorial means, you can get that. The more introductory the topic, the better and more reliable the explanation. When I was learning, I remember spending a lot of hours just staring at a screen, completely lost on what to do next to debug something. I’m assuming you haven’t used it for coding very much? How can you be so confident it would be useless for them? Isn’t this just speculation?
Anyway, this is all kind of beside the point. If it’s not useful, people won’t use it, and there’s no need to be angry about its use. If it is useful, it can help make games that are worth playing, and people shouldn’t be attacked for that.


“illegal = unethical” is a fascist take
That is not why I’m mentioning it; I agree that legality and ethics are separate. The point is that regardless of who is right about the ethics of this, applying vigilante enforcement to this kind of situation is unhinged, and signals like legality about whether something is acceptable do matter for that. If such popular enforcement is ever justified, it’s in situations where people are getting hurt, where there is little ambiguity and clear malice. That’s absolutely not the case here.


I promise you, none of what I write here is AI, I’m against doing that


You’re kind of right, in that it’s not a total solution right now and you probably won’t be able to vibe code a whole game (except maybe a really simple one) with no knowledge. But that doesn’t mean it couldn’t lower the skill floor for someone. I’m assuming the person in my scenario would also be using an engine like Unity or Godot, maybe asking the AI to walk them through how to do what they want, write simple scripts, and explain/suggest syntax. That shouldn’t have too much risk of generating inadvertent backdoors, and I think LLMs are pretty good at explaining basic code. Game engines already enforce the basic design structure, which makes it easier to avoid big unfixable mistakes and do everything in small pieces an LLM is less likely to fuck up.
The same is true of using it for art: you’re right that a lot of AI art on Steam is obvious and looks the same, but really good AI-assisted art isn’t. The amount of skill and effort required for that isn’t zero, but it’s less than it would be otherwise. There are probably a lot of games out there where you just can’t tell, and because there’s so much fear of backlash, it just isn’t disclosed.


Games bring together a lot of different mediums and require a diverse set of skills. Someone might be great at drawing, for instance, and have a great idea for a game that uses their art, but struggle with coding, and use AI to simplify that part for them in a way that’s more flexible than a more restrictive solution like RPG Maker, bringing the result closer to their vision for the kind of game they wanted to make. I think such a game could be worth playing, assuming the person making it cares about what they are making and puts their own work into it.


I’m talking about the ethics
You’re talking about your supposed right to enforce your idea of ethics on people who don’t agree with you, in a situation where there is no universal consensus, there is no law backing you up, and all supposed harms are abstract, indirect, and essentially a dispute about market competition.
Just because they may have no ill intent is irrelevant, it only speaks to their ignorance on the matter.
“I’m sorry officer, I didn’t mean to speed; I had no ill intent.” Ok, you’re still getting a ticket. Ignorance is no excuse.
It matters because it’s one clear reason why the people harassing them are assholes. Pretty different from a situation where someone has violated an established law closely tied to the risk of direct physical harm, and that law is being enforced.


the models are absolutely trained on stolen art
Downloading isn’t stealing, and in this case the law doesn’t agree with you either, nor does Steam; games developed with AI are legal and allowed. You’re entitled to your opinion about the ethics of it, and I think it’s fine if people want to only buy games without AI, but this is an incredibly petty way to rationalize organized harassment against people with no ill intent trying to realize their dreams. The only reason anyone goes after them is because they are softer targets than any of the billionaires and corporations doing actually questionable things with the technology.


I don’t think using AI to help make a videogame is in any way nefarious or misuse, especially for smaller developers who wouldn’t have the resources to make the game they had in mind otherwise. They don’t deserve to get review bombed or have nasty messages left on all their social media by organized Discord groups just because of that, and it’s understandable they’d be worried about it.


I think maybe they wouldn’t, if they’re trying to scale their operations to scanning millions of sites and yours is just one of them.


Maybe because of all the brigading/harassment campaigns? If it weren’t for that I’d think this is totally fine, since it’s good for people to be able to know more about what they’re buying.


I haven’t played Neopets; is there something about its design that would make the situation better than Roblox’s?


That’s literally what the comment above it was doing too though. It’s a very common anti-AI argument to appeal to social proof.


We can’t afford to make any of this. We don’t have the money for the compute required or to pay for the lawyers to make the law work for us
I don’t think this is entirely true; yeah, large foundational models have training costs that are beyond the reach of individuals, but plenty can be done that is not, or that can be done by a relatively small organization. I can’t find a direct price estimate for Apertus, and it looks like they used their own hardware, but it’s mentioned that they used ten million GPU hours on GH200 GPUs; I found a source online claiming a rental cost of $1.50 per hour for that hardware, so the cost of training this could be loosely estimated at around 15 million dollars.
That is a lot of money if you are one person, but it’s an order of magnitude smaller than the settlements of billions of dollars being paid so far by the biggest AI companies for their hasty unauthorized use of copyrighted materials. It’s easy to see how copyright and legal costs could potentially be the bottleneck here preventing smaller actors from participating.
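For what it’s worth, the back-of-envelope multiplication works out to about $15 million; the GPU-hour figure is from the Apertus materials, but the $1.50/hour rental rate is just an unverified price found online, so treat this as an order-of-magnitude sketch only:

```python
# Rough training-cost estimate from the figures above.
# gpu_hours is the reported Apertus compute budget; the rental
# rate is an unverified price found online, so this is only a
# loose order-of-magnitude estimate, not an official cost.
gpu_hours = 10_000_000      # ~10 million GH200 GPU hours
rate_usd_per_hour = 1.50    # claimed rental price per GPU hour

cost_usd = gpu_hours * rate_usd_per_hour
print(f"${cost_usd:,.0f}")  # → $15,000,000
```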
It should benefit the people, so it needs to change. It needs to be “expanded” (I wouldn’t call it that, rather “modified” but I’ll use your word) in that it currently only protects the wealthy and binds the poor. It should be the opposite.
How would that even work though? Yes, copyright currently favors the wealthy, but that’s because the whole concept of applying property rights to ideas inherently favors the wealthy. I can’t imagine how it could be the opposite even in theory, but in practice, it seems clear that any legislation codifying limitations on use and compensation for AI training will be drafted by lobbyists of large corporate rightsholders, at the obvious expense of everyone with an interest in free public ownership and use of AI technology.


But we can’t afford to pay. I don’t think open models like the one in the OP article would be developed and released for free to the public if there were a complex process of paying billions of dollars to rightsholders in order to do so. That sort of model would favor a monopoly of centralized services run only by the biggest companies.
I am thankful for the safety feature where locking the lid also depresses a button that allows the food processor to operate, but I also keep it unplugged when the lid is off for an extra layer of redundancy.


TikTok
I think you’re always going to have problems with a lack of authenticity on platforms where opaque algorithms do all the work of deciding what gets popular and what gets shown to whom.


I like it, more people should adopt unusual typing quirks imo
Barely even trying to pretend to be honest