extensions.gnome.org, the official hosting site for GNOME Shell extensions, will no longer accept new submissions containing AI-generated code. Due to the growing number of AI-generated extensions looking to appear on the site, a new rule has been added to the review guidelines stating that AI-generated code will be explicitly rejected:
extension developers should be able to justify and explain the code they submit, within reason
I think this is the meat of how the policy will work. People can use AI or not. Nobody is going to know. But if someone slops in a giant submission and can’t explain why any of the code exists, it needs to go in the garbage.
Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work. You’re not done. You’re not almost done. You have a working prototype that you now need to turn into something of value.
You know, GNOME does some stupid stuff, but I can respect them for this.
Why? If the code works the code works, and a person had to make it work. If they generated some functions who cares? If they let the computer handle the boilerplate, who cares? “Oh no the style is inconsistent…” Who cares?
@uncouple9831
That person debugging that stuff, that’s who cares. Why would that be anyone other than the original author? This sounds like a hosting service refusing to host things based on what tool was used in creation. “Anyone using emacs can’t upload code to GitHub anymore” seems equivalently valid.
I applaud the move, but man, that’s gonna be a lot of work on their end.
Rare, so needed Gnome W
You used to be able to tell an image was photoshopped because of the pixels. Now with code you can tell it was written with AI because of the comments.
Emojis in comments, filename as a comment in the first line, and so on
I’ve been in the habit of putting the filename as first comment in most of my scripts forever. I don’t know when or why I started but please don’t make me change!
You’re absolutely right — we shouldn’t have to change our style just because a machine copies it.
They weren’t hiding it, they started with vibe
and from seeing quite a few slops in my time
# Optional but […]
So what does this mean? Because (at least with my boss) whenever I submit AI-generated code at work I still have to have a deep and comprehensive understanding of the changes I made, and I have to be right about what I say (I can’t just say the AI solved the problem). What’s the difference between that and me writing the code myself (+ googling and Stack Overflow)?
The difference is people aren’t being responsible with AI
You’re projecting competence onto others. You speak like you’re using AI responsibly
I use AI when it makes things easier. All the time. I bet you do too. Many people are using AI without a steady hand, without the intellectual strength to use it properly in a controlled manner
It’s like a gas can over a match. Great for starting a campfire. Excellent for starting a wildfire.
Learning the basics and developing a workflow with VC is the answer.
That sounds like copium… But I’ll hear you out. What is VC? It better not be version control
Large language models are incredibly useful for replicating patterns.
They’re pretty hit and miss with writing code, but once I have a pattern that can’t easily be abstracted, I use it all the time and simply review the commit.
Or a quick proof of concept to ensure a higher level idea can work. They’re great for that too.
It is very annoying, though, when people submit code to me that is entirely AI-generated and incredibly incorrect.
It’s just another tool on my belt. It’s not going anywhere, so the real trick is figuring out when to use it and why, and when not to.
To be clear, VC was version control. I should have been clearer.
Okay, that’s pretty fair. You seem to understand the tool properly
I’d argue that version control is not the correct layer to evaluate output, but it is a tool that can be used in many different ways… I don’t think that’s a great workflow, but I can conceive of situations where it’s viable enough
If I were handing out authorizations to use AI, you’d get it
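For what it’s worth, the “develop a workflow with version control” idea in this thread can be sketched in plain git: land the generated code as its own commit, then review that commit as a single unit before it touches anything shared. Everything below (the file name, the code, the commit message) is illustrative, not from the thread.

```shell
# A minimal sketch of the review-the-commit workflow, assuming git is installed.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Pretend this helper came out of an LLM session:
printf 'def double(x):\n    return x * 2\n' > util.py

# Commit the generated code on its own so it can be reviewed in isolation:
git add util.py
git commit -q -m "generated: double() helper, unreviewed"

# The whole generated change is now one reviewable unit:
git show --stat HEAD
```

From there, `git show HEAD` gives the full diff to read line by line, and the commit can be amended or dropped entirely if the review fails, which keeps unvetted output off shared branches.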
Banning a tool because the people using it don’t check their work seems shortsighted. Ban the poor users, not the tool.
We do this all the time. I’m certified for a whole bunch of heavy machinery; if I were worse, people would’ve died.
And even then, I’ve nearly killed someone. I haven’t, but on a couple occasions I’ve come way too close
It’s good that I went through training. Sometimes, it’s better to restrict who is able to use powerful tools
Yeah, something tells me operating heavy machinery is different from uploading an extension for a desktop environment. This isn’t building medical devices, this isn’t some MISRA compliance thing, this is a widget. Come on, man, you have to know the comparison is insane.
The answer you seek is literally the post.
Excellent