While I personally believe using GenAI is unethical for a number of reasons, I can see why people might want to vibe code for personal projects, where if it breaks, it is what it is and only you bear the consequences.
But wasting the time of the maintainers of important games preservation software because you’ve got GenAI and a big ego is fucking stupid.
If you want to vibe code on production, go work for Microslop.

I can easily imagine vibe coders thinking of themselves as some kind of oppressed group. Deep down, they know vibe coding is problematic, but it’s SO much more attractive to make yourself feel like a victim.
> untested
Require new contributors to add test code with their patch that hooks into your continuous integration system, and have part of that CI path use a code coverage tool to validate that their test code is actually executing the patch in question?
I mean, that:
- Creates a speedbump that I’d assume low-effort patches (LLM-generated or otherwise) won’t get past.
- Shouldn’t obstruct someone willing to put effort into their patches.
- Doesn’t affect long-term contributors at all.
- Ensures that you have at least some minimum level of test coverage. Yes, theoretically they might never have actually run their test code themselves and only have it running as part of your CI path, but even then, at least something is running it, and it’s probably going to be pretty obvious if they keep pushing a PR many times until it passes.
- If you can get some LLM to generate test code that’s actually being run, good (well, okay, it might not be sufficient, but it’s at least clear that it’s not crashing the program or similar).
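For what it’s worth, a minimal sketch of what that gate could look like, assuming a Python project tested with pytest and measured with coverage.py; the base branch, file glob, and paths are placeholders, not any real project’s setup:

```python
#!/usr/bin/env python3
"""CI gate sketch: fail unless the lines a PR touches are executed by tests."""
import json
import re
import subprocess
import sys


def changed_lines(base: str = "origin/main") -> dict[str, set[int]]:
    """Map each changed .py file to the line numbers added or modified."""
    diff = subprocess.run(
        ["git", "diff", "--unified=0", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    ).stdout
    changes: dict[str, set[int]] = {}
    current: str | None = None
    for line in diff.splitlines():
        if line.startswith("+++ "):
            path = line[4:]
            current = path[2:] if path.startswith("b/") else None
        elif line.startswith("@@") and current:
            # Hunk header "@@ -a,b +start,count @@": take the "+" side.
            m = re.search(r"\+(\d+)(?:,(\d+))?", line)
            start, count = int(m.group(1)), int(m.group(2) or 1)
            changes.setdefault(current, set()).update(range(start, start + count))
    return changes


def main() -> int:
    # Run the suite under coverage and dump a machine-readable report.
    subprocess.run(["coverage", "run", "-m", "pytest"], check=True)
    subprocess.run(["coverage", "json", "-o", "coverage.json"], check=True)
    with open("coverage.json") as f:
        report = json.load(f)["files"]

    uncovered = []
    for path, lines in changed_lines().items():
        if path not in report:
            # Never imported at all: conservatively treat as uncovered.
            uncovered.extend((path, n) for n in sorted(lines))
            continue
        missing = set(report[path]["missing_lines"])  # executable but not run
        uncovered.extend((path, n) for n in sorted(lines) if n in missing)

    for path, n in uncovered:
        print(f"changed but never executed by tests: {path}:{n}")
    return 1 if uncovered else 0


if __name__ == "__main__":
    sys.exit(main())
```

The obvious weak spot is that "executed" isn’t the same as "meaningfully asserted on", but as a speedbump it’s cheap.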
All of that is nice, and tbh should be standard for everyone anyway (in a controlled, non-public setting), but it doesn’t address the actual issue of sloperational overload.
Firstly, an unattended gated CI build step for the public is a terrible idea from a resource point of view: you’d have all of the same people submitting their unchecked hallucinated code to be run by your CI build, eating resources.
If someone in the project has to manually check the code before allowing it into the CI build, then you have the same problem as now.
Low-effort sloperators will just ask the hallucination machine to generate test code to do what you specified; a likely outcome is that it generates something that fails in the pipeline step.
> it’s probably going to be pretty obvious if they keep pushing a PR many times until it passes.
This is important, because it’s an easy enough step to configure your local agent/harness/whatever it’s called now to use the CI build as an input to measure success, so now you have automated generation loops eating CI resource.
All this does is move the bottleneck from people resource to compute resource, and the outcomes could be even worse.
The concept itself is fine, but the resource issue would stop it from being viable pretty quickly.
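One crude brake on the generate-and-retry loop would be a per-author run budget. A hypothetical sketch, assuming GitHub Actions (the repo name, token variable, and budget are all made up):

```python
"""Hypothetical per-author CI budget check: refuse to run the pipeline for
an author who has already burned their daily allowance of workflow runs."""
import datetime as dt
import os
import sys

import requests

REPO = "example/preservation-project"  # placeholder
BUDGET = 5                             # max CI runs per author per day (made up)


def runs_in_last_day(author: str) -> int:
    """Count the author's workflow runs created in the last 24 hours."""
    since = (dt.datetime.now(dt.timezone.utc) - dt.timedelta(days=1)).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/actions/runs",
        params={"actor": author, "created": f">={since}"},
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]


if __name__ == "__main__":
    author = os.environ["PR_AUTHOR"]  # assumed to be set by the workflow
    if runs_in_last_day(author) >= BUDGET:
        print(f"{author} has hit the {BUDGET}-runs/day budget; refusing to run")
        sys.exit(1)
```

That caps the bleeding per account, but it doesn’t stop someone spreading submissions across accounts, so it only softens the resource problem.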
It might work if there were some way to prove a successful CI run using the submitter’s resources…somehow.
Move the compute penalty to the submission side.
Firstly you’d need a way of guaranteeing the CI run was functionally identical to the one run by the project, and then you’d need a way to guarantee the results weren’t fudged; probably more things as well.
I’ve not heard of such a thing, but that doesn’t mean it doesn’t exist.
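The closest approximation I can sketch is probabilistic rather than a guarantee (pure speculation, not an existing tool): the submitter runs the suite in a digest-pinned container and submits a hash of the transcript, and the project re-runs a random fraction of submissions to check the digests reproduce, so fudging results becomes a gamble. Something like:

```python
"""Spot-check sketch: accept most submissions on trust, audit a random
fraction by re-running them and comparing transcript digests."""
import hashlib
import random
import subprocess

# Placeholders: a digest-pinned image and a test command. The command would
# have to produce deterministic output (timestamps, ordering, ...), which is
# exactly the "functionally identical" problem above.
PINNED_IMAGE = "ghcr.io/example/ci@sha256:..."
TEST_CMD = ["pytest", "-q"]
AUDIT_RATE = 0.1  # re-run 10% of submissions


def run_and_digest() -> str:
    """Run the suite in the pinned container and hash image + transcript."""
    result = subprocess.run(
        ["docker", "run", "--rm", PINNED_IMAGE, *TEST_CMD],
        capture_output=True, text=True,
    )
    h = hashlib.sha256()
    h.update(PINNED_IMAGE.encode())
    h.update(result.stdout.encode())
    h.update(str(result.returncode).encode())
    return h.hexdigest()


def accept_submission(claimed_digest: str) -> bool:
    """Accept on the submitter's word most of the time; occasionally audit."""
    if random.random() > AUDIT_RATE:
        return True  # unverified; the deterrent is the audit odds
    return run_and_digest() == claimed_digest
```

The determinism requirement is doing all the heavy lifting there, and the audits still eat project compute, just less of it.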
The best solution I can think of right now is social: something like a trusted contributors list, people who have proven they can work within the bounds of the project’s rules and guidelines, possibly with a recommendation system.
It’s not a great solution though and still eats resources in managing such a system.
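As a rough illustration of what such a gate could look like (the data shapes and names are made up):

```python
"""Sketch of a social CI gate: trusted authors run CI freely, newcomers
need vouches from enough existing trusted contributors."""
TRUSTED: set[str] = {"alice", "bob"}  # long-term contributors (placeholder)
VOUCHES: dict[str, set[str]] = {      # newcomer -> who has vouched for them
    "carol": {"alice", "bob"},
}
VOUCHES_NEEDED = 2


def may_trigger_ci(author: str) -> bool:
    """Allow CI for trusted authors or newcomers with enough trusted vouches."""
    if author in TRUSTED:
        return True
    return len(VOUCHES.get(author, set()) & TRUSTED) >= VOUCHES_NEEDED


assert may_trigger_ci("alice")
assert may_trigger_ci("carol")        # vouched for by two trusted members
assert not may_trigger_ci("mallory")  # untrusted, no vouches
```

Even then, someone has to maintain the list and adjudicate the vouches, which is the management overhead I mean.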