

I struggle to think of anything AI is good enough at (i.e., won’t make mistakes) to use in production. It’s fine if you’re making something that doesn’t matter, or where a mistake wouldn’t be a big deal.


Ok, but the AI actually is bad. Like, it’s bad at writing code. The code I reviewed was sloppy at best. If I got that from a high school student for an assignment, I would give it a B. If I got that from a college senior, I would give it a D. If I got that from a junior dev, I would give them a serious lecture about testing their code.


Heroic + Bottles seems to cover everything I need, so I don’t need Lutris. I’m sorry if those don’t cover all of your needs.


There are lots of legal problems with accepting any AI generated code, regardless of its quality. For one, the AI tends to reproduce copyrighted code without a proper license:
https://youtu.be/xvuiSgXfqc4?t=247
Another is that AI generated code is not copyrightable, so even if it’s not copying someone else’s code, it can’t be licensed under an open source license.


What I’m saying is that it’s probably not worth forking Lutris, because the code is bad. It would be better to just switch to a better alternative.


Last week, he was saying that everyone was a bullshitter because no one could point to any low-quality code the AI produced. So I reviewed his commits, and of the four I reviewed, two had bugs.


Based on the thread I originally linked, and the dev’s response, with regard to Lutris, I think the answer is A.
I bet you’re just bubbling with excitement.


“that’s a pretty good deal” is him saying he’s fine with those bugs.
He’s also heavily downplaying the severity of that bug. If a user hit that bug, it would keep copying that AppImage file over and over until it filled their disk and crashed the app. Then the user would have to figure out what happened and where all those duplicates were to fix their system, all while things were falling apart because nothing could write to the disk.
Many systems cannot successfully boot if the disk is full, so if the system crashed or was rebooted, users who don’t know how to navigate a root shell would probably have to reinstall. Even if the system didn’t crash, many apps won’t start when the disk is full, so the user is going to have a really bad time overall.
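To make the runaway-copy failure mode concrete: this is a hypothetical sketch, not Lutris’ actual code (all names here are mine). An importer that copies discovered AppImages into a directory it also scans will re-import its own output on every pass, doubling disk usage each time, unless it tracks what it has already handled:

```python
import os
import shutil

def unique_dest(dest_dir: str, name: str) -> str:
    """Pick a destination path that doesn't collide with existing files."""
    base, ext = os.path.splitext(name)
    dst = os.path.join(dest_dir, name)
    n = 1
    while os.path.exists(dst):
        dst = os.path.join(dest_dir, f"{base}-{n}{ext}")
        n += 1
    return dst

def import_appimages(scan_dir: str, dest_dir: str, already_imported: set) -> None:
    """One importer pass. Recording both the source and its copy in
    `already_imported` is the guard that breaks the loop: without it,
    every pass re-discovers the previous pass's copies and duplicates
    them again until the disk fills."""
    os.makedirs(dest_dir, exist_ok=True)
    for name in sorted(os.listdir(scan_dir)):
        if not name.endswith(".AppImage"):
            continue
        src = os.path.join(scan_dir, name)
        if src in already_imported:
            continue  # the missing guard: skip files we already handled
        dst = unique_dest(dest_dir, name)
        shutil.copy2(src, dst)
        already_imported.update({src, dst})
```

With scan_dir and dest_dir pointing at the same directory, repeated passes stay at one original plus one copy; delete the `already_imported` check and the file count doubles on every pass, which is the disk-filling behavior described.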
Later in the thread, another user defended the severity of the bug by pointing to other bugs that Lutris has shipped which have damaged their users’ systems.
It’s also worth noting that I only reviewed four of his commits and found two bugs, one severe. So the frequency of these bugs seems much higher than without AI tools. Who knows how many others the AI has introduced, but I’m not going to review all of his slop if he can’t even be bothered to do it properly before he commits it.


Thanks. :) So not technically Scheme, but a fork of Scheme.
Script-Fu is probably the oldest binding system for extending GIMP. It is also a Scheme variant, which has been evolving independently for many years.


Doesn’t GIMP use Scheme for its plugins?


According to Flathub it gets over 75k downloads a month.


He’s made it clear that this discussion is invited, so I don’t feel like I’m overstepping by continuing to engage. If he states that I’m no longer invited to participate or the discussion is no longer welcome on his platform, then I’ll stop.
According to the US Copyright Office, the AI is not just a tool. So, with regard to copyright, the code it produces is not yours, and is not protected in any way by your license. As stated before, he is aware of this now and is fine with the new code in Lutris not being protected by the GPL.
I agree that I’m not entitled to any of his work (as long as he doesn’t violate any of my copyrights), and that he is not obliged to provide me with anything. (Although technically the license he chose to use does entitle me to his work, but whatever.) But that knife cuts both ways. He is not protected from me spreading the word about his use of AI generated code, and especially not protected from me finding bugs in his project written by the AI and telling people about those.
I’m not interested in starting a fork. First, I’m not a Python dev, and second, I already manage several large open source projects, and one closed source one.
There are also other projects that are alternatives to Lutris. As long as Lutris is being filled with AI code, I will recommend those alternatives, and try to get others to do the same.


He very much was uninformed. Most devs don’t understand copyright law. He was no exception. He didn’t understand the legal implications of using the AI tools until they were explained to him in that discussion. He also didn’t believe that the AI generated code had bugs until they were pointed out to him. He called people “bullshitters” for saying it did.
It’s not condescending to describe someone as uninformed. Being uninformed isn’t a moral failing or shameful. We all start off uninformed. I had to read a shit ton of copyright and patent law when I was going through the patent process, because I too was uninformed. There is no shame in that.
What really matters is what you do once you become informed. He is now informed, and has stated he doesn’t mind the code in Lutris being non-copyrightable and therefore unlicensed. He has also stated that he doesn’t mind the bugs it produces. I hope these are just knee-jerk reactions and that he changes his mind.


I have no intention of making him miserable. I don’t think he’s a bad person, unlike some other people in that discussion. In my opinion, he was uninformed about the dangers of AI generated code. He was also uninformed about the quality of AI generated code, thinking that it wasn’t introducing bugs. Now he’s informed, but he is still going to use the AI. I’m hoping that’s just because he’s being stubborn. But, that’s something that people should know, so they can choose whether or not to continue using Lutris.
I was and still am a fan of Lutris, but I have switched to Bottles. Bottles is still missing some features that Lutris has, but I just can’t trust Lutris’ code and devs anymore. It makes me really sad, because the project itself is really cool.
I honestly, genuinely hope that he will see what a bad idea the AI code is before the project reaches an unmaintainable state.


I’m involved in that discussion because I like Lutris and don’t want the project to suffer because of the use of AI tools. The developer challenged people in that discussion (myself included) to find low quality code that had been pushed recently from the AI. I did. Two of his last four commits introduced bugs.


In what way is what I said false?
Do you think he’s not cool with AI generated bugs in Lutris? Do you think the code isn’t full of bugs? Do you think the reason he’s cool with AI generated bugs isn’t because his code is already full of bugs?
It certainly seems like all of those elements are in what he said. He knows that the AI is introducing bugs (I pointed out two bugs that it introduced in that thread), and he’s fine with it (he said it’s a pretty good deal), because the code base was already buggy before (he’s seen so much worse in code he’s shipped in the project).
He kept challenging everyone in that thread to find below average code pushed recently. I took him up on it, and looked through his last four commits (all attributed to Claude) and found two bugs. He is totally fine with that. If it were me, I would really rethink using a tool that introduces bugs in half of its commits.


In what way is what I said false? His statement describes the bugs as a “pretty good deal” because he’s seen “so much worse” in his code without the inclusion of AI. Therefore, he’s cool with AI generated bugs because his code is already full of bugs.
Good news. Hopefully they’ll get rid of those two exceptions in the future.