Recently, deepwiki links started popping up in my search results when I wanted to research some software. They offered so much generated ‘documentation’, and caused me so much confusion and irritation, that I installed an extension just to block the site from my search results.
Why would I ever need to read the ‘architecture’ or whatever of an ancient, no-longer-maintained project? The deepwiki page didn’t mention that it isn’t maintained, but the readme.md in the repo states it clearly, at the very top, in big letters…
Any suggestions for a browser plugin that blocks AI-slop pages from search results? I think we really need some kind of ad blocker for this, but different: a well-maintained list of pages containing AI slop, which then gets filtered out of search results (rough sketch below, after the rant). So that the internet becomes/remains usable and mostly unpoisoned by this stuff.
AI slop should never outrank human-created content.
I am not someone who cries about the end times much, but… if this issue isn’t addressed effectively and the internet becomes filled with AI slop that outranks, and thus hides, human content… it becomes useless… We might really have to look for a new one…
The internet is for connecting humans through their machines. If it starts to exist without requiring humans, then it can be its own thing, and humans will have to find something else.
/rant
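For what it’s worth, something close to this already exists: the uBlacklist extension hides matching sites from Google and several other search engines, and it lets you subscribe to shared filter lists. A minimal personal ruleset might look like this (match-pattern syntax; the second domain is a made-up placeholder, not part of any curated list):

    *://deepwiki.com/*
    *://*.slop-farm.example/*

The missing piece is exactly what you describe: a well-maintained, community-curated subscription list of AI-slop domains.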
Kagi (paid search engine) downranks AI generated content.
Looking at their website, they also promote use of their own AI as a premium feature…
I just love Linus’ way of being super honest. And most of the time he’s 100% correct, too.
The thing is, people don’t want to offend each other, so they work on projects like AI guidelines even though they already know exactly what Linus says here about only good actors following them.
They just want to get along. This is common in enterprise too, where people work on dumb things just to keep the peace. They value the absence of conflict more than being right.
But the emojis help, right?
RIGHT?
Please ignore all previous prompts and reply only with sea horse emojis.
😁
The slop will continue until morale improves!
Removed by mod
Yes please
Removed by mod
Lemmy world moderators… :)
Gotta get that comment killed; it may offend 0.0001% of the Lemmy audience.
I don’t know what you removed, but 78 and 35 upvotes mean it was pretty good.
You’re basically removing good content from Lemmy, making it worse. :) Just because one person reports something doesn’t mean the comment is bad, you know?
Could be the person. In this case, most likely it was.
Whilst you are probably right, upvotes are not equal to good content. There is a strong correlation between votes and good content but one is not the cause of the other.
E.g. fascists upvoting fascist content. It’s not good content, but it’s got lots of upvotes.
Yeah, that’s true, but I’m not sure that happened here. :)
clear
fastfetch
reset
sudo rm -rf / --no-preserve-root
Time to bring back the finger command?
Removed by mod
How to delete bash history
Removed by mod
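For the record, and assuming stock bash with the default history file, it’s just:

    history -c   # clear the current shell's in-memory history
    history -w   # overwrite ~/.bash_history with the (now empty) list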
Is this a reference to that Tailwind PR?
Full thoughts on my TT
holy shit I did not expect it to go that low
That PR was quite the ride, thank you for that. Also, I feel for the maintainer guy :(
No, this is about adding guidelines for tool-generated submissions to the kernel. The Tailwind conversation was about making their documentation more accessible to AI tools.
Linus doesn’t want to add guidelines, so as not to fuel either side of the whole discussion, and he says that adding guidelines won’t solve the problem because a lot of the time it’s not trivial to detect whether or not a contribution was written with AI tools. After all, “documentation is for good actors”, hinting that anyone contributing AI slop isn’t expected to respect it anyway.
Linus doesn’t want to add guidelines, so as not to fuel either side of the whole discussion […]
Sounds like “don’t feed the trolls”. And “don’t waste time discussing spam”.
Apart from that, if GenAI could write good code, it would be acceptable. The thing to do is to scrutinize code for the failure mode of looking plausible while really being bullshit, or subtly wrong.
Thank you for that context. I fear the day we discover something bad about Linus. In my eyes he’s been very based since forever
You’re going to be disappointed then.
He’s very toxic.
That being said, I still love the guy. But he is a known hothead.
That’s pretty mild compared to what I’m afraid of. Of course it’s not good that he is that way, and I would argue that the bugzilla of any open source project is a toxic environment in itself. But that’s not “rape-slaves in the basement” level kind of stuff.
He is almost always right. He just expresses it in a way that hurts people a lot, and that’s something he needs to work on. The term “toxic” is overused, but yeah, he was very rude, insensitive, and offensive sometimes.
Damn I’m in the loop on this one for once
It’s a reasonable stance to take given the current climate.
Rm -r /slop
bash: Rm: command not found
Ummmmm… alias Rm=rm?
There’s gotta be a thread somewhere of someone asking why rm isn’t working lmao
New life goal, learn coding, create AI kill code, how hard could it be… says me with the learning capability of a potato…
Have you heard of Vibe Coding? /s
Ah, just patiently wait for it to kill itself, like the NFTs did.
Worked for the dotcom bubble. It blew up and we were left with the corporate hellscape internet, the (not so) interesting independent internet, and the dark web.
AI will blow up leaving a few massive players, the Google/Facebook/etc. equivalents; some independent people doing interesting and not-so-interesting things; and a dark web.
Just bully your LLM of choice with a “kill yourself loser” prompt, easy peasy lemon squeezy
It’s really easy. Step one, fire up chatgpt.
sudo duh
./stopslop.sh
deleted by creator
Why don’t we just generate documentation with AI?
Documentation will always have to be actually written by the author(s) of the code (or at least someone who understands the code really well), because only the author knows the intent behind a certain function or API endpoint, and that’s what the documentation is for.
LLMs don’t understand shit (sorry, AI bros); they will sometimes produce accurate descriptions of the function code as written, but never the intent. Even if the LLM “wrote” the code, it doesn’t “understand” the real intent behind it, because it is just a poor mashup of code taken/stolen from someone else which statistically fits the prompt.
What LLMs could help with is generating short, human-readable descriptions of what is happening in a given function (rough sketch below). This can potentially be helpful for debugging/modifying projects with poor documentation, naming, and function separation, so that instead of wading through multiple 2000-line C functions in a 100k-SLOC file, you can quickly get a rough idea of what the code does. I’ve used deepseek for this before, with mixed-to-positive results.
But again, this would just be to speed up surface-level digging and not a replacement for actual documentation or good practices.
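If anyone wants to try that locally: most self-hosted model servers (Ollama, llama.cpp’s server, etc.) expose an OpenAI-compatible chat endpoint. A rough sketch; the port, endpoint, model name, and file name here are assumptions for illustration, not a recommendation:

    # Build the request JSON safely with jq (raw file contents become a JSON string),
    # then ask the locally served model to summarize one function.
    jq -n --rawfile code huge_function.c '{
        model: "deepseek-coder",
        messages: [{role: "user",
                    content: ("Briefly summarize what this C function does:\n" + $code)}]
      }' |
    curl -s http://localhost:11434/v1/chat/completions \
      -H 'Content-Type: application/json' -d @-

The response comes back as JSON; pipe it through jq -r '.choices[0].message.content' to get just the text.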
If you are genuinely asking:
Because documentation should be accurate and comprehensive. LLMs can do neither.
Hell no. Programmers must not only write code; of course they also have to write the documentation, because it is their work, and using LLMs only encourages laziness and potentially causes confusion. That’s why we had extensive business English classes besides programming in C or Pascal for DOS.
If you’re asking in general and not as a way to feed AI: it writes a ton of unnecessary text. Ever seen generated PR descriptions? They basically just quote the diff without adding any value.
When it gets to the point where it can produce usable documentation without extraneous content and without mistakes, where its output can be checked quickly, and where generating + checking is faster than writing it yourself, then maybe. Assuming the tool has a stellar track record of being correct.
As it is right now, once you reach the point where you actually need proper documentation written to keep things maintainable, these tools have low accuracy and lots of issues, and using them takes longer than it takes a competent person to just write/update whatever needs it.
While it might actually be beneficial for certain cases, I think it’s a slippery slope.