



No shit. There are easier ways to open the fridge.


unless you consider every single piece of software or code ever to be just “a way of giving instructions to computers”
Yes. Yes I do. That’s exactly what code is: instructions. That’s literally how computers work. That’s what people like me (software developers) do when we write software: We’re writing down instructions.
When you click or move your mouse, you’re giving the computer instructions (well, the driver is). When you press a key, that results in instructions being executed (dozens to thousands, actually).
When I click “submit” on this comment, I’m giving a whole bunch of computers some instructions.
Insert meme of, “you mean computers are just running instructions?” “Always have been.”
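To make the point concrete, here’s a quick illustration using Python’s standard `dis` module: even a trivial one-line function is, to the machine, nothing but a list of instructions (the exact opcode names vary by Python version).

```python
import dis

def add(a, b):
    # One line of "code" to a human...
    return a + b

# ...is just a sequence of instructions to the interpreter
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

Run it and you’ll see opcodes like `LOAD_FAST` and `RETURN_VALUE`. Clicking, typing, and submitting comments all bottom out in sequences like this.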


In Kadrey v. Meta (court case) a group of authors sued Meta for copyright infringement but the case was thrown out by the judge because they couldn’t actually produce any evidence of infringement beyond, “Look! This passage is similar.” They asked for more time so they could keep trying thousands (millions?) of different prompts until they finally got one that matched enough that they might have some real evidence.
In Getty Images v. Stability AI (UK), the court threw out the case for the same reason: It was determined that even though it was possible to generate an image similar to something owned by Getty, that didn’t meet the legal definition of infringement.
Basically, the courts ruled in both cases, “AI models are not just lossy/lousy compression.”
IMHO: What we really need a ruling on is, “who is responsible?” When an AI model does output something that violates someone’s copyright, is it the owner/creator of the model that’s at fault or the person that instructed it to do so? Even then, does generating something for an individual even count as “distribution” under the law? I mean, I don’t think it does because to me that’s just like using a copier to copy a book. Anyone can do that (legally) for any book they own, but if they start selling/distributing that copy, then they’re violating copyright.
Even then, there are differences between distributing an AI model that people can use on their PCs (like Stable Diffusion) vs. using an AI service to do the same thing. Just because the model can be used for infringement should be meaningless because anything (e.g. a computer, Photoshop, etc) can be used for infringement. The actual act of infringement needs to be something someone does by distributing the work.
You know what? Copyright law is way too fucking complicated, LOL!


Hmmm… That’s all an interesting argument but it has nothing to do with my comparison to YouTube/Netflix (or any other kind of video) streaming.
If we were to compare a heavy user of ChatGPT to a teenager that spends a lot of time streaming videos, the ChatGPT side of the equation wouldn’t even amount to 1% of the power/water used by streaming. In fact, if you add up the power/water usage of all the popular AI services, that still doesn’t amount to much compared to video streaming.


Sell? Only “big AI” is selling it. Generative AI has infinite uses beyond ChatGPT, Claude, Gemini, etc.
Most generative AI research/improvement is academic in nature and it’s being developed by a bunch of poor college students trying to earn graduate degrees. The discoveries of those people are being used by big AI to improve their services.
You seem to be making some argument from the standpoint that “AI” == “big AI” but this is not the case. Research and improvements will continue regardless of whether or not ChatGPT, Claude, etc continue to exist. Especially image AI where free, open source models are superior to the commercial products.


but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left.
No, you cannot reasonably assume that. It absolutely did not store the visual elements. What it did, was store some floating point values related to some keywords that the source image had pre-classified. When training, it will increase or decrease those floating point values a small amount when it encounters further images that use those same keywords.
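The “nudge some floats” idea can be sketched with a toy example (nothing like real diffusion training, just the principle): the “model” for a keyword is a handful of floating point weights, and each training image only moves those weights a small step. No pixels are stored anywhere.

```python
import random

def train_step(weights, features, lr=0.05):
    # Nudge each weight a small step toward this example's feature values.
    # The image itself is thrown away; only the updated floats remain.
    return [w + lr * (f - w) for w, f in zip(weights, features)]

random.seed(0)
weights = [0.0, 0.0, 0.0]  # toy "model" for one keyword
for _ in range(200):
    # Many different images tagged with the same keyword, each a bit different
    features = [random.gauss(0.5, 0.1),
                random.gauss(-0.2, 0.1),
                random.gauss(0.8, 0.1)]
    weights = train_step(weights, features)

print([round(w, 2) for w in weights])
```

After 200 images the weights end up near the average of what they saw (roughly 0.5, -0.2, 0.8). That’s also why a training set that isn’t diverse enough is a flaw: a keyword’s weights can end up parked suspiciously close to one particular image.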
What the examples demonstrate is a lack of diversity in the training set for those very specific keywords. There’s a reason why they chose Stable Diffusion 1.4 and not Stable Diffusion 2.0 (or later versions)… Because they drastically improved the model after that. These sorts of problems (with not-diverse-enough training data) are considered flaws by the very AI researchers creating the models. It’s exactly the type of thing they don’t want to happen!
The article seems to be implying that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck. This is false. It’s flaws like this that leave your model open to attack (and letting competitors figure out your weights; not that it matters with Stable Diffusion since that version is open source), not just copyright lawsuits!
Here’s the part I don’t get: Clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at. Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!” why TF should anyone care?
They shouldn’t! The only reason why articles like this get any attention at all is because it’s rage bait for AI haters. People who severely hate generative AI will grasp at anything to justify their position. Why? I don’t get it. If you don’t like it, just say you don’t like it! Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?
Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.
Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck. Now that we’ve got the pre-alpha version of that very thing, a lot of extremely vocal haters are freaking TF out.
Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!
Generative AI uses up less power/water than streaming YouTube or Netflix (yes, it’s true). So if you’re about to say it’s bad for the environment, I expect you’re just as vocal about streaming video, yeah?


Correction: Newer versions of ChatGPT (GPT-5.x) are failing in insidious ways. The article has no mention of the other popular services or the dozens of open source coding assist AI models (e.g. Qwen, gpt-oss, etc).
The open source stuff is amazing and gets better just as quickly as the big AI options. Yet they’re boring so they don’t make the news.


Well, the CSAM stuff is unforgivable but I seriously doubt even the soulless demon that is Elon Musk wants his AI tool generating that. I’m sure they’re working on it (it’s actually a hard computer science sort of problem because the tool is supposed to generate what the user asks for and there’s always going to be an infinite number of ways to trick it since LLMs aren’t actually intelligent).
Porn itself is not illegal.


I don’t know, man… Have you even seen Amber? It might be worth an alert 🤷


I don’t know how to tell you this but… Every body gives a shit. We’re born shitters.


The real problem here is that Xitter isn’t supposed to be a porn site (even though it’s hosted loads of porn since before Musk bought it). They basically deeply integrated a porn generator into their very publicly-accessible “short text posts” website. Anyone can ask it to generate porn inside of any post and it’ll happily do so.
It’s like showing up at Walmart and seeing everyone naked (and many fucking), all over the store. That’s not why you’re there (though: Why TF are you still using that shithole of a site‽).
The solution is simple: Everyone everywhere needs to classify Xitter as a porn site. It’ll get blocked by businesses and schools and the world will be a better place.


“To solve this puzzle, you have to get your dog to poop in the circle…”


Yep. Stadia also had a feature like this (that no one ever used).
Just another example of why software patents should not exist.


Maybe take pictures of them instead?


There’s going to be some hilarious memes/videos when these get deployed:


Like I said initially, how do we legally define “cloning”? I don’t think it’s possible to write a law that prevents it without also creating vastly more unintended consequences (and problems).
Let’s take a step back for a moment to think about a more fundamental question: Do people even have the right to NOT have their voice cloned? To me, that is impersonation; which is perfectly legal (in the US). As long as you don’t make claims that it’s the actual person. That is, if you impersonate someone, you can’t claim it’s actually that person. Because that would be fraud.
In the US—as far as I know—it’s perfectly legal to clone someone’s voice and use it however TF you want. What you can’t do is claim that it’s actually that person because that would be akin to a false endorsement.
Realistically—from what I know about human voices—this is probably fine. Voice clones aren’t that good. The most effective method is to clone a voice and use it in a voice changer, using a voice actor that can mimic the original person’s accent and inflection. But even that has flaws that a trained ear will pick up.
Ethically speaking, there’s really nothing wrong with cloning a voice because, from an ethics standpoint, there’s no impact. It’s meaningless; just a different way of speaking or singing.
It feels like it might be bad to sing a song using something like Taylor Swift’s voice but in reality it’ll have no impact on her or her music-related business.


wouldn’t we expect countries with strong social programs like Norway to have much higher birth rates? I suppose those social programs would tend to correlate with birth control
I was unfamiliar with Norway’s program so I looked it up…
49 weeks of maternity leave? FUCK YEAH!
$160/month (USD equivalent) for kids under 6? Not nearly enough! That is of negligible impact and doesn’t come close to offsetting the costs of raising a child.
My two takeaways from learning about Norway’s programs:
Also, “when everyone gets a subsidy, no one gets a subsidy” (my own saying). It seems inevitable that daycare costs would increase by the subsidy amount in order to capture it as profit. Basically, long-term subsidies like that ultimately fail because of basic economics. They can work fine in the short term, though.
I still stand by what I said: Having kids makes you less economically stable and until we fix that, fertility rates will continue to decline.
Seems like the biggest thing that needs to be fixed though is housing costs.


Pollution would make sense if people were trying to have kids but couldn’t. But they’re not trying to have kids at all!
The more likely explanation—related to tech—is that we don’t need kids anymore. For 99% of human history, children were necessary and not having kids was basically impossible (horny kids and no birth control). Kids were how humans kept alive/stable as well as expanded their power and influence! It’s also how they got cared for in old age (though that’s a much lesser concern because I seriously doubt humans of the past thought that hard about such things when living to 40 was considered amazing).
Now we have birth control and—in Western societies—stability/safety is much more likely if you don’t have kids. We’ve basically flipped the script on our evolution.
You want people to have kids? Flip the script back! Make anyone under 30 without kids pay a massive tax that pays for the kids of people who have them! Basically, make everyone who didn’t have kids pay child support.
Make having kids the best damned economic decision anyone can make with diminishing returns after two (kids).


You make AI voice generation sound like it’s a one-step process, “clone voice X.” While you can do that, here’s where it’s heading in reality:
“Generate a voice that sounds like a male version of Scarlett Johansson.”
“That sounds good, but I want it to sound smoother.”
“Ooh that’s close! Make it slightly higher pitch.”
In a process like that, do you think Scarlett Johansson would have legal standing to sue?
What if you started with cloning your own voice but after many tweaks the end result ends up sounding similar to Taylor Swift? Does she have standing?
In court, you’d have expert witnesses saying they don’t sound the same. “They don’t even have the same inflection or accent!” You’d have voice analysis experts saying their voice patterns don’t match. Not even a little bit.
But about half the jury would be like, “yeah, that does sound similar.” And a completely innocent person could lose the case.
Every modern monitor has some memory in it. They have timing controllers and image processing chips that need DRAM to function. Not much, but it is standard DDR3/DDR4 or LPDDR RAM.