Does this specify the kinds of AI? Are none of these devs using code completion in their IDEs? Or refactoring tools? Because the bulk of them use AI these days.
I’m sure everyone has already explained this to you, given the number of downvotes, but algorithms aren’t equal to AI.
Ever since the rise of AI, people seem to have lost the ability to recall things prior to 2019.
Personally speaking I don’t care at all about dev tools, as they have always been used. Vibe coding does bother me though - if you don’t know HOW to code, you probably shouldn’t be doing it.
The real issue though is using AI generated assets. If you have a game that uses human made art, story, and music, no one is going to complain about you using AI. Even if you somehow managed to get there via vibe coding.
Jesus fuck that’s some goal post moving.
Here is a frog, please help me split its hairs
The seal looks like this: [image of the seal]
Code completion is probably a gray area.
Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.
You could also reasonably make a claim that the model is legally in the clear as far as licensing, if the training data was entirely open source (non-attribution, non-share-alike, and commercial-allowed) licensed code. (A big “if”)
All of that to say: I don’t think I would label code-completion-using anti-AI devs as hypocrites. I think the general sentiment is less “what the technology does” and more “who it does it to”. Code completion, for the most part, isn’t deskilling labor, or turning experts into chatbot-wrangling accountability sinks.
Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting against factories locking children inside for 18-hour shifts, getting maimed by the machines or dying trapped in a fire. It was never the technology itself, but the social order that was imposed through the technology.
Even yesteryear’s code completion systems (the ones that didn’t rely on LLMs) are, technically speaking, AI systems.
While the term “AI” has become the next “crypto” or “blockchain”, in reality we’ve been using various AI products for the better part of the past 30 years.
“AI” has become synonymous with “Generative AI”
You mean code completion that just parses a file into an AST and does fuzzy string matching against tokens used to build that AST? I would not personally classify that as AI. It’s code that was written by humans and is perfectly understandable by humans. There is no probabilistic component present, there is no generated matrix, there’s no training process, it’s just simple parsing and string matching.
It’s early and I’m tired and probably in a poor mood and being needlessly fussy, so I apologize if this completely misses the point of your comment. I agree that there’s other stuff we’ve been using for ages which could be reasonably classified as “AI,” but I don’t feel like traditional code completion systems fit there.
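(For what it’s worth, a toy sketch of that kind of non-ML completion, with made-up identifiers: parse the source, collect the names from the AST, then prefix- and fuzzy-match against them. Every step is deterministic string handling; there’s no model anywhere.)

```python
# Toy, non-ML completion: parse the file into an AST, collect the identifiers
# that were used to build it, then match a typed prefix against them.
# No training, no learned weights -- just parsing and string matching.
import ast
from difflib import get_close_matches

def collect_identifiers(source: str) -> set[str]:
    """Walk the AST and gather names, attributes, and function/class names."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            names.add(node.id)
        elif isinstance(node, ast.Attribute):
            names.add(node.attr)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)
    return names

def complete(prefix: str, source: str, limit: int = 5) -> list[str]:
    """Suggest identifiers from the file that match the typed prefix."""
    candidates = collect_identifiers(source)
    exact = sorted(n for n in candidates if n.startswith(prefix))
    # Fall back to fuzzy matches when nothing starts with the prefix.
    fuzzy = get_close_matches(prefix, candidates, n=limit, cutoff=0.6)
    return (exact + [n for n in fuzzy if n not in exact])[:limit]

if __name__ == "__main__":
    code = "def render_sprite(frame):\n    sprite_cache = load_sprites()\n    return sprite_cache[frame]\n"
    print(complete("spr", code))  # ['sprite_cache']
```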
AI doesn’t have to be probabilistic; a classical computer science definition of AI says it has to be an actor that reacts to some percepts according to some policy.
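(In that textbook sense even a hard-coded thermostat qualifies; a minimal, entirely deterministic sketch, with invented thresholds:)

```python
# A "policy mapping percepts to actions" in the classical sense.
# Nothing probabilistic or learned: a hard-coded thermostat already counts.
# Thresholds are invented for the example.
def thermostat_policy(temperature_c: float) -> str:
    """Percept: current room temperature. Action: what to do about it."""
    if temperature_c < 19.0:
        return "heat_on"
    if temperature_c > 23.0:
        return "heat_off"
    return "idle"

for reading in (17.5, 21.0, 24.2):
    print(reading, "->", thermostat_policy(reading))
```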
We used to call the code that determined NPC behaviour AI.
It wasn’t AI as we know it now, but it was intended to give vaguely realistic behaviour (such as taking a sensible route from A to B).
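(The “sensible route from A to B” part was typically just a search over the level grid, e.g. A* or plain breadth-first search; a rough sketch with a made-up map:)

```python
# Rough sketch of "route from A to B" game AI: breadth-first search over a
# tile grid with walls ('#'). Deterministic pathfinding, no learning involved.
from collections import deque

GRID = [
    "....#....",
    ".##.#.##.",
    ".#.....#.",
    ".#.###.#.",
    ".........",
]

def shortest_path(start, goal):
    """Return a list of (row, col) tiles from start to goal, or None."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            in_bounds = 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
            if in_bounds and GRID[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None  # goal unreachable

print(shortest_path((0, 0), (4, 8)))
```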
And honestly lightweight neural nets can make for some interesting enemy behavior as well. I’ve seen a couple games using that and wouldn’t be surprised if it caught on in the future.
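(As a rough illustration of what “lightweight” means here, assuming nothing about any specific game: a single-hidden-layer net mapping a few state features to an action, small enough to run every frame. The weights below are random stand-ins.)

```python
# Toy "lightweight neural net" enemy controller: one hidden layer mapping a
# few game-state features to an action. Weights are random stand-ins here;
# in a real game they would come from training or an evolutionary search.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))  # 3 input features -> 8 hidden units
W2 = rng.normal(size=(8, 3))  # 8 hidden units -> 3 possible actions

ACTIONS = ["chase", "flee", "patrol"]

def decide(distance_to_player: float, own_health: float, player_health: float) -> str:
    """Run the state features through the tiny net and pick the best action."""
    x = np.array([distance_to_player, own_health, player_health])
    hidden = np.tanh(x @ W1)
    scores = hidden @ W2
    return ACTIONS[int(np.argmax(scores))]

print(decide(distance_to_player=0.2, own_health=0.9, player_health=0.3))
```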
Used to?
I would primarily understand it as being free of generative AI content (images and sound), which is what is most obvious when actually playing a game. I’m personally not against using LLMs for coding if you actually know what you’re doing and properly review the output. However, at that point most people will come to the conclusion that they could write the code manually anyway and probably save time.
Would using AI to generate samples to get a framework of the product be permitted or not? Is placeholder generation allowed?
Since you would never see it, that’s pretty much irrelevant. Clearly this is about AI-generated art and AI-generated assets.
Whether or not you use AI to grey-box something is a pointless distinction, given that there’s no way to prove it one way or the other.
But it still removes labor from the working class. My point is that the lines are blurry. You practically cannot draw a useful line based on the tooling used.
These are exactly my thoughts. You need to specify. Is a product “AI” when Windows is used to develop it? Windows itself is an “AI” product, in the sense that it was built with AI assistance.
Labels are meaningless without sensible rules and enforcement.