I have been a sponsor on Patreon almost since the account was opened (maybe 4 months in). It’s my longest-running Patreon sponsorship.
I’ve gone ahead and cancelled. Many thanks to the developers, sorry it had to end like this.
Sincerely… if you can give a single shit about ai in code, you should be able to tell it was used. If you cannot differentiate human from ai authored code, you do not have a seat at the table. jeer from the soap boxes. code is not art. code is code. get over it. does it compile or run and do the thing, cool, who fucking cares who or what wrote it. clutching pearls y’all can’t even define.
The Lutris team is small, not corporate, not speed obsessed, etc. I’m inclined to trust them to be among those developers who can use generated code without slopping nonsense all over a code base they know they will probably be stuck maintaining.
I would never use the product, for that very line alone…
People use LLMs to code now, this is not news. Why is claude taking credit in the first place?
Anything generated by an LLM cannot claim copyright, per Supreme Court rulings. So it is critical to attribute the portions of code that cannot be licensed.
This… is incorrect. Generated code can and has been copyrighted, but not by the model generating it. Humans can get copyrights, digital entities cannot (nor can your pet monkey.) Now, can a human copyright code they did not author? Yes, absolutely. Courts only care that a human had a hand in as little as refining the output or making selections for the agent. Copyright claims look for exercised creative judgement and infringement on existing copyrights.
It’s not my decision whether Lutris has AI code in it or not. The maintainers and contributors can decide what works for them; that’s how open source works. I never found a use for Lutris, and maybe that’s why I don’t care.
I don’t think this is going to go the way they think it will.
deleted by creator
Someone please fork Lutris so we can make a sanitary version without this filth!
Whether or not I use Claude is not going to change society
This gives me shopping cart theory vibes. I don’t usually base my moral compass on whether my action will have some kind of measurable impact, but on whether I believe it’s the right thing to do. After the intense doubling down in that discussion thread I’m definitely steering clear of Lutris. It costs me very little effort to avoid projects that do icky things I don’t want to encourage (even though it may not have a measurable impact~)
I can’t fix the problem, therefore I’ll be part of the problem.
At my job we’ve been told we have to start using AI more. I can’t really see the point. The only tasks AI can help me with are pointless tasks from HR that shouldn’t exist in the first place. Monthly forms with questions like “how are you feeling emotionally” used to take me ages to come up with corpo-bullshit-friendly answers, but locally hosted DeepSeek does it in seconds.
When my work enabled Gemini, I asked it how to disable it. It said it couldn’t help me and asked if I had another question. I didn’t.
That’s the only interaction I’ve willingly had with it.
The HR department will see that it’s not quality human HR-slop and the thought police will be with you shortly
Oh LLMs are great at writing HR slop
But then there’s no suffering
In my experience, AI models are fairly good at contextual search. That’s the only thing I use them for.
Yes, if we had documentation then I suspect AI tools could be good for finding information in that.
Lutris has always been a bit hit-or-miss for me, I avoided it unless it was the only option, as it only worked half the time. I don’t want it to come off like it shouldn’t exist, as stuff making Linux easier to use is great, but I don’t use it at all in my current workflows.
I guess I’ve just been behind the times, but I’ve never had an incentive to switch. I just installed faugus and transferred everything over and it seems very slick. It seems to be missing 1 or 2 things, like environment variables per-game, but all the other important stuff seems to be here. I know what I’m doing with prefixes so having all the knobs to turn is great, but honestly linux gaming does not need most of those knobs nowadays.
How does transferring work?
I only have 2 or 3 things in lutris.
I just did it manually, pointing faugus at the old prefixes and setting the launch options the same
Sick. Thanks. I’ll do the same.
Also, it is one thing to decide that something is not an ethical issue of concern, it is another thing to act with disrespect to everyone with a different opinion.
it is another thing to act with disrespect to everyone with a different opinion.
Unless that opinion is ‘I like using AI’, then they deserved the disrespect.
virtue ethics > utilitarianism
Utilitarianism really falls at the first hurdle of any evaluation of it as a moral system.
It has no real prescriptive power, because it demands that you correctly foresee the outcomes of your actions. That is exactly what “The road to hell is paved with good intentions”, an adage at least 400 years old, warns against, and yet people still gravitate towards utilitarianism as if society had not been explicitly cautioning us about that mindset forever.
At this point I can’t help but look down on those who genuinely identify as utilitarian as either too young, too stupid, or actively malevolent and trying to find a way to justify their bad behaviours as errors rather than malice or negligence.
I’d offer you a counterpoint (ignoring the issue with Lutris and AI for a minute):
If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them? If you’re following some rule that disagrees with the utilitarian view, then by definition it’s a rule that in your own opinion leads to a worse outcome for everyone.
It’s of course completely fine to not be utilitarian, but trying to claim that all utilitarians are either stupid or evil is just incorrect.
ignoring the issue with Lutris and AI for a minute
Please by all means, I ignored it in the first place, I find this way more interesting.
If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them?
Well, this is only half the problem. It’s a bad system because it demands the impossible of you (i.e. accurately predict the future) but it also has a really narrow interest in the dimensions of human morality.
To directly answer the question however: you judge them by a set of principles, whichever you deem right, that you apply consistently across choices.
When it comes to interpersonal choices, the vast majority of questions can easily be answered by asking yourself “am I betraying some explicit or implicit bond of trust with someone (who has not done so themselves) by doing/saying this?”, and if you are, you just stop.
And to be clear, I don’t claim to follow this principle 100% of the time, I am not a saint, but that to me is the guiding principle when there are stakes to my behaviour, and it has not failed me yet.
If you’re following some rule that disagrees with the utilitarian view, then by definition it’s a rule that in your own opinion leads to a worse outcome for everyone.
(Emphasis added)
At its core, the idea of utilitarian morality is to “maximise utility”, that is to do whatever does the most “good” to the highest number of people.
This is, IMO, a terrible metric, and as a deontologist I am perfectly happy reaching a “worse” outcome by it.
It is not particularly hard to see how, by applying this metric, you can justify any kind of scapegoating, abuse, and/or undue leniency on people that would deserve harsh punishment in any deontological or virtue based system, as soon as enough “good” is produced through it.
There is a very dark, but apt, joke about this kind of approach to morality: that 9/10 people involved in it endorse gang rape.
To me, morality is a qualitative assessment, not a quantitative one.
It does not matter how many perpetrator lives will be ruined if they have earned their punishment, and it does not matter how much happier they would be to get away with the crime than the victim would suffer, comparatively.
To do anything else would be to relinquish morality to the whims of the masses, because it implies that there is a threshold past which the abuse of the few becomes negligible due to the benefits it brings to the many.
trying to claim that all utilitarians are either stupid or evil is just incorrect.
To be fair I also stated they can be naïve; I was one too in my youth, until I learned and understood better.
I’m now assuming it all is and deleting Lutris.
What a moron.
Oh yeah. Here’s another nugget:
Sometimes, I generate some code with Claude and commit by hand
Sometimes, I write code manually and ask Claude to commit
Sometimes, I ask OpenClaw to generate some code, which doesn’t put the Co-Authorship
Sometimes, the whole thing is AI generated from end to end
This is also a somewhat recent addition to Claude Code. I was kinda surprised when I first noticed it but didn’t think much of it, I was like “meh, I guess we’re doing that now, whatever, some people might take issue with it, whatever”. Also, do keep in mind that I love trolling people coming in my projects to complain about my methods.
For those who are anti-AI, it’s a safe assumption that any addition to the project has had some kind of AI interaction during the development process.
https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355
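For context, the attribution in question is the `Co-Authored-By` trailer that Claude Code appends to commit messages. Assuming that trailer format (the exact wording may vary between versions), a clone’s history can be searched for it with plain git; this is just an illustrative sketch, not a claim about what Lutris’ history actually contains:

```shell
# List commits whose message contains the Claude co-authorship trailer.
# The trailer text below is an assumption; adjust it to whatever the log shows.
git log --all --oneline --grep='Co-Authored-By: Claude'

# Count them, e.g. to gauge how much of the history is affected.
git log --all --oneline --grep='Co-Authored-By: Claude' | wc -l
```

Note that `--grep` matches the commit message, so this only finds commits where the trailer was left in; once the attribution is stripped, as discussed above, there is nothing left to search for.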
Sometimes, I ask OpenClaw to…
This person should not be trusted with anything.
That is the real shame in all this. I’m certainly not updating lutris any more, because there is no way of knowing what you will install on your system.
You can trust humans (as in, “trusting is an option”). You can never trust an LLM. And admitting that there might be unsupervised commits being installed on possibly thousands of PCs is terrifying.
Glad I use Heroic instead. Time to check what their AI policy is.
Based on some PRs, they’re using GitHub Copilot to help with reviews but are generally against vibe coding
💯 this. I don’t mind using an LLM for certain tasks; we all do at the end of the day. However, OpenClaw is a different topic. This is just dangerous.
So Trump’s gonna give him the nuclear launch codes any minute now, is what you’re saying?
Now I’m really worried this software can wipe out my home directory
They are free to do what they want to on their repo.
We are free to fork if need arises.
Personally I don’t like projects not showing what AI has made. And most of Claude was made on stolen code. It’s against the open source license they themselves use: https://github.com/lutris/lutris/blob/master/LICENSE
But almost no one actually enforces the license until the big companies show up. I hope they change their minds, but until then, I’m going to stop using/contributing for a while.
We are free to fork if need arises.
…and how do you ensure your fork does not contain a single commit involving even a single line written by Claude? If you can’t, then isn’t your fork slop by default?
And most of Claude was made on stolen code.
Sure, it learned to code by reading lots of code, most of it publicly available online for anyone to read and learn from, but not explicitly licensed for a machine to read and learn from. I doubt it’s possible to teach an ML system (or, for that matter, a human being) how to code without reading lots of example code. And any code you’ve ever read has an impact on any code you write afterward, same as in any other creative endeavor; that’s why clean-room design exists as a defense against copyright infringement.
Does anyone know which was the last version before the dev started shoveling slop into the repo? The utter dipshit invalidated even the ability to license after that point; those releases are wholly worthless.
In 5 years from now there are going to be totally coevolved but distinct seed-lines for software: the ones with AI, and the ones without. How can you distinguish them? Did the human who said they wrote them really write them? These problems aside, I suspect it will be forced to happen purely from a security viewpoint: big companies won’t be able to get any kind of insurance anymore running AI-infested code.
It’s like low-background steel, the non-radioactive kind that has to be recovered from sunken warships
That last bit needs to hit sooner.
Fork it and call it Ludique, meaning fun in French.
It’s more nuanced than that. Claude is made from stolen code, but it generally isn’t going to copy its training data verbatim (unless specifically told to), so copyright-wise it’s more grey than strictly wrong. And though Claude is made from stolen code, the Lutris developers are writing something they give away freely to the world; they are not profiting from the stolen code.
Does this make it OK? I don’t know. What if they used an open-weights model rather than a closed one? Would that be more acceptable?
No, open weights changes nothing. Using stolen material is the issue. Especially for a GPL project, a licence normally used to scare off corporate vultures. Why should anyone respect Lutris’ licence when they gave up on the authorship of their own product?
“This works perfectly, which is why I’m removing all ways to audit what it has contributed.”
“because that’s the only way to use it without being harassed online”
I disagree with his reasons for removing it, but they are pretty clear.
The downvotes are only making your argument for you, lol
Downvotes are pretty much death threats, amirite?
My Lemmy karma :(
Larma.
You’re either new to social media or being deliberately dense if you don’t understand the correlation between unpopular opinions, downvotes and harassment.
You can harass someone in many ways, not simply by threatening to kill them.
Here’s my issue with this specifically. It makes Lutris very vulnerable to being considered entirely public domain:
“AI” has been known to present code from other projects and hence other licenses. It can’t become public domain unless all of that code was also public domain.
I’d imagine there have been legal decisions more nonsensical than “AI output = public domain” that have had the full force of law for decades.
I recently dug around for a while, and if the copyright of works in the training data affects the copyright of outputs, no popular model can output anything that would even be close to acceptable for a contribution to an open-source project. Maybe if you trained a model exclusively on “The Stack” (NOT “The Pile”) and then included all the required attributions – but no ready-made model does that. All of the “open source” model frameworks that I could find included some amount of proprietary “pre-training” data that would also be an issue.
If AI output is NOT affected by the copyright of training data… there might not BE a (legal) person that can hold any copyrights over it, which is pretty close to public domain.
Good Sire, if we are talking only about the US, then that does not matter at all. Existing copyright law and established precedent (without involving AI) already cover this. The copyright of software is handled like that of literature, so the actual content is copyrighted: more specifically, the sequence of words. In order to violate the copyright of a protected work, one just has to reproduce this sequence. It is not relevant whether it was reproduced by an AI, a human, God, or your cat (:D). The only exception to this is fair use. Whether fair use applies must be considered on a case-by-case basis; there are four factors used in deciding whether something falls under it. And that is assuming that portions of that code are not patented. If they are, then you are screwed no matter what (unless you are allowed to use that code).
Anyhow, you are opening yourself up to litigation for sure.
Now, is this a problem? Probably not. Copyright infringement is actually very, very hard to spot, especially without automated tools (looking right at you, YouTube). Even if it is spotted, the owners of the copyright must spend resources to enforce it. Considering that most of the code used in the training data is open source, most of these owners won’t have those resources, or at least aren’t using them (which is sad, because that also applies to infringement by companies). You cannot lose if no one sues. Whether you should risk it is anyone’s decision to make.
For unprotected code… I guess you are right. It could go one way or the other, but it does not really matter that much. At worst, people can use your code without adhering to your license. That would not mark the end of a project; the former definitely would.
Also on another note: Using copyrighted material in the training data of AI is considered fair use.
deleted by creator
There is no settled legal status on the output of AI systems, and it’s certainly something that needs clarification going forward. The law may treat asking an LLM to regurgitate its training data differently from having it follow instructions in a local context. Human engineers are allowed to use “retained knowledge” from their experiences even if they can’t bring their notebooks from previous careers; LLMs are just better at it.
As of March 2, it has been settled. AI generated works must have substantial human creative input in order to be copyrightable. Prompting the AI does not meet that requirement.
In other words, if the AI wrote the code, and you didn’t change it since then, it’s not yours at all. It’s public domain, no question.
Prompting the AI alone does not meet that requirement. I.e., you can’t say “draw me a picture of a cat” and then copyright the picture of the cat, claiming you made it.
You can say “help me draw this left ear over here, now make the right ear up here, a little taller, darken the edges a bit”, all with prompts, but with your sufficient creative input.
That’s not how the dev said he’s generating code. He said sometimes he does it without any intervention at all.
Also, that’s potentially copyrightable. That hasn’t been settled.
deleted by creator
US defaultism strikes again
I said, in the issue, I was talking about the US.
Glad it applies worldwide /s
Slop can’t be copyrighted, great. We don’t want slop.
Your link doesn’t support what you’re saying in the slightest. Have whatever opinion you want, but don’t shovel up transparent bullshit to push your narrative.
TFA is about a copyright claim on a work made by a purely autonomous device, and SCOTUS declining to hear a case doesn’t “settle” jack-shit.
Quoting further:
Thaler submitted an application to the US Copyright Office to register copyright in “A Recent Entrance to Paradise,” explicitly identifying the AI system as the author and stating the work was created without human intervention.
For now, businesses and creators using AI should continue to rely on the longstanding human authorship requirement. Under current law, works made solely by autonomous AI are not eligible for copyright protection in the United States. Ongoing cases also consider the amount of human input, including prompting or post-generation editing, required to register copyright in an AI-generated work.[12]
Companies should ensure a human contributes creatively and is named as the author in any copyright applications for AI-assisted works. To maximize protection, organizations should review their creative workflows and document human involvement in AI-assisted projects, particularly for commercial content. Organizations should continue to document the timing and scope of the use of AI in copyrightable works, for example by retaining prompts provided by the author. Internal policies should clarify attribution, ownership, the nature of creative input, and documentation requirements to avoid denied copyright applications.
Iteratively working on a codebase by guiding an LLM’s design choices and feeding it bug reports is fundamentally different from this case you’re citing.
If all you do is prompt the AI, “hey, fix bugs in this repo,” then you had no creative input into what it produces. So that kind of code would not be copyrightable, 100%. You can fight it in court, but the Supreme Court refusing to hear it means the lower court’s decision is settled law, and your chances of winning are essentially zero.
Whether code where you hold its hand and basically pair program with it is copyrightable hasn’t been settled. Considering the dev said he does it both ways, the point is rather moot, since for sure, he doesn’t own the copyright to at least some of that AI generated code.
OpenClaw is an autonomous system just like the one in that article, and the dev said that’s what he’s using at least some of the time. It generates and commits code without human intervention.
Been chewing on this since yesterday. Okay, here are my two cents:
- yes, what LLM companies are doing is a problem. So dropping anything that has anything to do with their products is a sane way to make a statement
- yes, LLMs can be used effectively in development. Whether the Lutris author has been using them well, I don’t know. I guess I won’t bother to check either; I have other things to do
- yes, doing the stunt with “good luck guessing what is what” is bullshit
Net total, given I’ve already dropped GNOME because of their culture: guess now I am dropping Lutris. Not because of AI per se, but because of the “fuck you” move
but because of the “fuck you” move
The guy removed the attribution because he is being harassed.
The ‘fuck you’ move is the people harassing an open source dev. Those people are the source of the bad behavior, not the guy who volunteers his time maintaining an open source project for everyone to use.
The anti-AI crowd is toxic and needs to fuck off. It’s one thing to have an opinion; it’s another to harass volunteers because they’re using tools the crowd has a hateboner for.
The guy removed the attribution because he is being harassed
That may be, but he never mentions this in the now-famous comment. Or was the message about Lutris being slop harassment? (The question is genuine; I am somewhere on the autistic spectrum.)
The ‘fuck you’ move is the people harassing an open source dev
That is not decent behaviour, no question. But his doing something preemptively about something he says he doesn’t see as an issue: that’s some bullshit. I am not against him using LLM tools, but I am not OK with someone who can’t just say “this is how I am doing things, these are my reasons and they are enough for me, so fuck off (and/or be banned, if GitHub has such a thing)” and instead goes on with some ill-reasoned tirade. Before anyone brings it up: yes, he also mentions depression, which is no small thing, so demanding crystal-clear reasoning is also bullshit. But that is not my point; my point is that the guy needs some care, and it doesn’t look like he understands that. Which means things are heading towards a disaster, sadly
That may be, but he never mentions this in the now-famous comment. Or was the message about Lutris being slop harassment? (The question is genuine; I am somewhere on the autistic spectrum.)
There were a lot of toxic conversations in Discord and on the forums for a while prior to his blowing up.
The dev hasn’t made a secret of his mental health struggles and he probably could have handled the situation a bit better. But, in the end, he’s a guy making a tool that helps the entire community and even if you think AI tools run on the blood of sacrificed puppies, it isn’t okay to harass someone personally.
Argue about water usage or power usage, copyright issues, etc., but as soon as they start attacking the person directly it has gone way too far. His response could have been better, but the blame should fall completely on the anti-AI harassment squad, not on the lack of PR skills of a volunteer developer.
Blame for different things:
Running around and cursing at anyone using an LLM is an idiotic thing to do, and he is not the one doing it, of course
not the lack of PR skills of a volunteer developer
That’s not what bothers me
But, in the end, he’s a guy making a tool that helps the entire community
While sacrificing his own life (time, energy, emotions, all it takes to keep doing it). That’s not worth it, damn it. Doing something just to say “good luck figuring this out on your own, if it bothers you that much, you stupid fucks” is a priority shift from “what is good for me/project/community” to “what to do with project to stop this toxic shit”. My answer is “Do nothing with the project. Get them to fuck off or get yourself out of their reach”. And my requirement of anyone in charge of anything is clarity
Edit: word “sacrificing” is important. Not sharing out of abundance, not serving out of devotion, but cutting from what he has and needs for the benefit of others
Oh I agree he’s handled it badly, I just don’t fault him much.
He’s just one guy who’s suddenly the target of tens or hundreds of people who’re directly harassing him everywhere that it is possible. He shouldn’t be put in that position and, as bad as his response is, he’s doing it in the context of a pressure and harassment campaign… not because he’s suddenly developed animosity for the community.
His response is bad, but the people creating the situation are the ones that shoulder the blame… imo.
On that we agree completely. Screaming “N is bad because llm was used to build it” is utter idiocy
Net total, given I’ve already dropped GNOME because of their culture
what was wrong with gnome’s culture?
I use KDE BTW; I don’t want a Fisher-Price/Mac-lookalike UI
- You want customisation? Use extensions
- We broke extensions, because
- Also, no API for extensions. Patch our code manually
No integrity in that see I, so drop them I do (Yoda voice)
Also refusing to make literally any compromise on cross-desktop protocols that everyone else wants, stalling progress for years
Gnome: Pissing off its userbase since 2011
Last point is enough for me to drop them for good
What’s the replacement for lutris?
I already replaced Lutris with the Heroic launcher plus Proton and Wine-GE a year ago.
Lutris install scripts already failed more than half the time for me, and Battle.net on Lutris would always corrupt itself after a while, forcing me to reinstall it every few months; it’s been going strong for a year on Heroic.
You can also always look at the lutris install scripts and install those components in heroic via winetricks. They were made by the community anyway.
For games, I have replaced it with Steam, as you can add non-Steam games and run them under Proton. I have had great success. Outside of games, I’m not sure.
I’m pretty sure neither is purely for games? I mean, you don’t necessarily have to limit Steam to games. May as well try non-games and see what happens.
Build one yourself
Didn’t look for one yet. As I understand it, there is a thing called Bottles that is worth a try