
  • I didn’t learn to program using AI, so I don’t know exactly how it would go for an amateur who is still learning, but I have incorporated it into my work, so I know it can be very useful and save a lot of time, and that isn’t just about generating code. If you want to plan out how to debug something, you can get solid guidance. If you want clarification on an unclear part of a tutorial, you can get that. The more introductory the topic, the better and more reliable the explanation. I remember, when I was learning, spending hours just staring at a screen, completely lost on what to do next to debug something. I’m assuming you haven’t used it for coding very much? How can you be so confident it would be useless for them? Isn’t this just speculation?

    Anyway, this is all kind of beside the point. If it’s not useful, people won’t use it, and there’s no need to be angry about its use. If it is useful, it can be used to assist making games that are worth playing, and people shouldn’t be attacked for that.




  • You’re kind of right, in that it’s not a total solution right now and you probably won’t be able to vibe code a whole game (except maybe a really simple one) with no knowledge. But that doesn’t mean it couldn’t lower the skill floor for someone. I’m assuming the person in my scenario would also be using an engine like Unity or Godot, maybe asking the AI to walk them through how to do what they want, write simple scripts, and explain or suggest syntax. That shouldn’t carry much risk of generating inadvertent backdoors, and I think LLMs are pretty good at explaining basic code. Game engines already enforce the basic design structure, which makes it easier to avoid big unfixable mistakes and do everything in small pieces an LLM is less likely to fuck up.

    The same is true of using it for art; you’re right that a lot of AI art on Steam is obvious and looks the same, but really good AI-assisted art isn’t. The amount of skill and effort required for that is not zero, but it is less than it might be otherwise. There are probably a lot of games out there where you just can’t tell, and because there’s so much fear of backlash it just isn’t disclosed.



  • “I’m talking about the ethics”

    You’re talking about your supposed right to enforce your idea of ethics on people who don’t agree with you, in a situation where there is no universal consensus, there is no law backing you up, and all supposed harms are abstract, indirect, and essentially a dispute about market competition.

    “Just because they may have no ill intent is irrelevant, it only speaks to their ignorance on the matter. ‘I’m sorry officer I didn’t mean to speed, I had no ill intent.’ Ok, you’re still getting a ticket. Ignorance is no excuse.”

    It matters because it’s one clear reason why the people harassing them are assholes. Pretty different from a situation where someone has violated an established law very closely linked to putting people at risk of direct physical harm and that law is being enforced.








  • “We can’t afford to make any of this. We don’t have the money for the compute required or to pay for the lawyers to make the law work for us.”

    I don’t think this is entirely true; yes, large foundation models have training costs beyond the reach of individuals, but plenty can be done that is not, or can be done by a relatively small organization. I can’t find a direct price estimate for Apertus, and it looks like they used their own hardware, but it’s mentioned that they used ten million GPU hours on GH200 GPUs; I found a source online claiming a rental cost of $1.50 per hour for that hardware, so the cost of training this could be loosely estimated at around 15 million dollars.
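    The back-of-envelope math, treating both the reported GPU-hour count and the rental rate above as rough assumptions rather than confirmed figures:

    ```python
    # Rough training-cost estimate; both inputs are assumptions
    # pulled from public claims, not confirmed numbers.
    gpu_hours = 10_000_000    # reported ~10M GH200 GPU hours
    rate_usd_per_hour = 1.50  # one online rental-price estimate

    cost = gpu_hours * rate_usd_per_hour
    print(f"${cost:,.0f}")    # → $15,000,000
    ```

    Actual on-prem cost could differ a lot from rental rates, so this is only an order-of-magnitude figure.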

    That is a lot of money if you are one person, but it’s an order of magnitude smaller than the settlements of billions of dollars being paid so far by the biggest AI companies for their hasty unauthorized use of copyrighted materials. It’s easy to see how copyright and legal costs, rather than compute, could be the bottleneck preventing smaller actors from participating.

    “It should benefit the people, so it needs to change. It needs to be ‘expanded’ (I wouldn’t call it that, rather ‘modified’ but I’ll use your word) in that it currently only protects the wealthy and binds the poor. It should be the opposite.”

    How would that even work though? Yes, copyright currently favors the wealthy, but that’s because the whole concept of applying property rights to ideas inherently favors the wealthy. I can’t imagine how it could be the opposite even in theory, but in practice, it seems clear that any legislation codifying limitations on use and compensation for AI training will be drafted by lobbyists of large corporate rightsholders, at the obvious expense of everyone with an interest in free public ownership and use of AI technology.