

They don’t give a fuck if murderers and armed robbers get away with their shit.
They care if murderers and robbers get away with their shit, they don’t care if murderers and robbers get away with your shit. Important distinction.


Again, read the rest of the comment. Wikipedia very much repeats the views of reliable sources on notable topics - most of the fuckery is in deciding what counts as “reliable” and “notable”.
that he just wants a propaganda bot that regurgitates all of the right wing talking points.
Then he has utterly failed with Grok. One of my new favorite pastimes is watching right wingers get angry that Grok won’t support their most obviously counterfactual bullshit and then proceed to try to argue it into saying something they can declare a win from.
More like 0.7056 IQ move.
Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.
Wikipedia presents the views of reliable sources on notable topics. The trick is what sources are considered “reliable” and what topics are “notable”, which is why it’s such a poor source of information for things like contemporary politics in particular.


I’ve noticed a lot of videos now show me a still ad and make me click “skip” right at the start, even through my ad blockers.


A lot of writing code is relatively standard patterns and variations on them. For all but the really interesting parts, you could probably write a sufficiently detailed description and get an LLM to produce functional code that does the thing.
Basically, for a bunch of common structures and use cases, the logic already exists, is well known, and has been replicated by enough people in enough places in enough languages that an LLM can reproduce it well enough - like literally anyone else who has ever written anything in that language.
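As a concrete illustration (my own example, not from the comment above), retry-with-backoff is exactly the kind of pattern that exists in near-identical form in thousands of codebases, so a detailed description would likely get an LLM to something like this:

```python
import time

def retry(fn, attempts=3, base_delay=0.1):
    """Retry a callable with exponential backoff - a pattern so common
    that countless near-identical versions exist in every language."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: let the last error propagate
            time.sleep(base_delay * (2 ** attempt))

# a deliberately flaky function that fails twice, then succeeds
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # prints "ok" after two retried failures
```

Nothing here is novel, which is the point - it is “well known and replicated by enough people in enough places” that statistical reproduction works.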


So they are both masters of troll chess then?
See: King of the Bridge


If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.
Or hearing the Beatles White Album and believing it tells you that a race war is coming and you should work to spark it off, then hide in the desert for a time only to return at the right moment to save the day and take over LA. That one caused several murders.
But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
If you’re sufficiently detached from reality, nearly anything validates the psychosis.
If a theory and every attempt at real world application of a theory yield wildly different results, shouldn’t that suggest something in the theory is deeply flawed?
Really it’s actually capitalism that supposes people are too dumb to make their own choices or know how a business is run, and thus shouldn’t have say over company choices.
Really it’s actually that businesses with that structure tend to perform better in a market economy. No one forces a business to be started as a “dictatorship run by a boss who effectively has unilateral control over all choices of the company” other than the people starting that business themselves. You can literally start a business organized as a co-op (which by your definitions is fundamentally a socialist or communist entity) - there’s nothing preventing that from being the organizing structure. The complaint instead tends to be that no one is forcing existing successful businesses to change their structure, and that a new co-op has to compete in a market where non-co-op businesses also operate.
If co-ops were a generally more effective model, you’d expect them to be more numerous and more influential. And they do alright for themselves in some spaces. For example in the US many of the biggest co-ops are agricultural.


To be clear, when you say “seeded from” you mean an image that was analyzed as part of building the image classifying statistical model that is then essentially running reverse to produce images, yes?
And you are arguing that every image analyzed to calculate the weights on that model is in a meaningful way contained in every image it generated?
I’m trying to nail down exactly what you mean when you say “seeded by.”


OK, so this is just the general anti-AI image generation argument where you believe any image generated is in some meaningful way a copy of every image analyzed to produce the statistical model that eventually generated it?
I’m surprised you’re going the CSAM route with this and not just arguing that any AI generated sexually explicit image of a woman is nonconsensual porn of literally every woman who has ever posted a photo on social media.


was seeded with the face of a 15yr old and that they really are 15 for all intents and purposes.
That’s…not how AI image generation works? AI image generation isn’t building a collage from random images in a database - the model doesn’t have a database of images within it at all. It just has a bunch of statistical weightings and a net configuration that are essentially a statistical model for classifying images, being run to produce whatever input maximizes an output resembling the prompt, starting from a seed. It’s not “seeded with an image of a 15 year old”; it’s seeded with white noise and basically asked to nudge that noise toward whatever it would rate as matching (in this case) “woman porn miniskirt”, then repeat a few times until the resulting image is stable.
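The loop described above can be sketched as a toy hill-climb - this is a deliberately simplified stand-in, not any real model’s algorithm; the `score` function plays the role of the trained weights, and `target` is a made-up stand-in for “what the model thinks the prompt looks like”:

```python
import random

def toy_generate(score, size=8, steps=200, step_scale=0.1, seed=0):
    """Toy sketch: start from pure noise and repeatedly nudge it toward
    whatever the scoring function rates as a better match for the prompt.
    No source images are stored or pasted together - only the scoring
    function (standing in for 'the weights') guides the process."""
    rng = random.Random(seed)
    image = [rng.gauss(0, 1) for _ in range(size)]  # the white-noise "seed"
    start = list(image)
    for _ in range(steps):
        # propose a small perturbation; keep it only if the score improves
        candidate = [x + rng.gauss(0, step_scale) for x in image]
        if score(candidate) > score(image):
            image = candidate
    return start, image

# stand-in "model": rates closeness to some target the weights encode
target = [0.5] * 8
score = lambda img: -sum((a - b) ** 2 for a, b in zip(img, target))

noise, refined = toy_generate(score)
print(score(refined) > score(noise))  # refined noise scores far better than raw noise
```

The key property it shares with real generators: the output comes from refining noise against learned weightings, not from retrieving stored images.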
Unless you’re arguing that somewhere in the millions of images tagged “woman” being analyzed to build that statistical model is probably at least one person under 18, and that any image of “woman” generated by such a model is necessarily underage because the weightings were impacted however slightly by that image or images, in which case you could also argue that all drawn images of humans are underage because whoever drew it has probably seen a child at some point and therefore everything they draw is tainted by having been exposed to children ever.


97% of the internet has no idea what Matrix channels even are.
I’ve been able to explain it to people pretty easily as “like Discord, but without Discord administration getting to control what’s allowed, only whoever happens to run that particular server.”


A more apt comparison would be people who go out of their way to hurt animals.
Is it? That person is going out of their way to do actual violence. It feels like arguing that watching a slasher movie makes someone more likely to go commit murder is a much closer analogy to arguing that watching a cartoon of a child engaged in sexual activity or w/e makes someone more likely to molest a real kid.
We could make it a video game about molesting kids, with Postal or Hatred as our points of comparison, if it would help. I’m sure someone somewhere has made such a game, and I’m absolutely sure you’d consider COD “fun and escapism” while someone playing that sort of game is doing so “in bad faith”, despite both being simulations of something that is definitely illegal, and the core of the argument being that one makes the person want to do the illegal thing more and the other does not.


…and most of the people who agree with that notion would also consider reading Lemmy to be “trawling dark waters” because it’s not a major site run by a massive corporation actively working to maintain advertiser friendliness to maximize profits. Hell, Matrix is practically Lemmy-adjacent in terms of the tech.


eventually they want to move on to the real thing, as porn is not satisfying them anymore.
Isn’t this basically the same argument as arguing violent media creates killers?


They have to deal with old men masturbating to them getting raped online.
The moment it was posted to wherever, they were going to have to deal with that forever. It’s not like they can ever know for certain that every copy of it ever made has been deleted.
Meh, no process is perfect and sensitivity and specificity are often enemies. Basically, in a lot of cases the more sensitive you make a test to detect something, the more likely it is to accidentally catch false positives.
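The sensitivity/specificity tradeoff can be shown with a toy numeric example (all scores, labels, and thresholds here are made up for illustration):

```python
def confusion(scores, labels, threshold):
    """Classify score >= threshold as positive, then compute
    sensitivity (true positive rate) and specificity (true negative rate)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    return tp / (tp + fn), tn / (tn + fp)

# toy detector: positives tend to score high, negatives low, with overlap
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.55, 0.5, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   1,   1,   0,    0,   0,   0,   0]

strict = confusion(scores, labels, 0.65)  # high bar: misses positives, no false alarms
loose = confusion(scores, labels, 0.35)   # low bar: catches everything, more false alarms
print(strict, loose)  # (0.6, 1.0) (1.0, 0.6)
```

Lowering the detection threshold raises sensitivity from 0.6 to 1.0 but drops specificity from 1.0 to 0.6 - catching every real case means flagging more innocent ones.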
Sounds like they’ve vastly improved its ability to detect; hopefully that didn’t come with false detections for people running unusual hardware or software combinations.