Guess we can always rely on the good old fashioned ways to make money…
Honestly, I think it’s pretty awful, but I’m not surprised.
Sure, AI will replace us all, solve climate change and cure cancer… but let’s do a little porn on the side, because why not, right?
they are desperate for some revenue increase to keep cooking the books a tad longer
I.e. porn.
Leave it to Sam Altman to turn a pervert problem into a pervert opportunity.
And while we’re at it, can we take away the “Open” in “OpenAI” considering they’re a for-profit private enterprise now? These scum don’t deserve to have “Open” in their name
Considering these recent developments, perhaps GapeAI
I have been thinking the same. There is nothing Open about them.
They’re Open for investment money
For a company that was seemingly so close to “AGI”, pivoting to porn is a weird priority.
Tbh it’s less weird than pivoting to weapons-grade uranium
Legend has it that every new technology is first used for something related to sex or pornography. That seems to be the way of humankind.
— Tim Berners-Lee, inventor of the World Wide Web, HTML, URLs, and HTTP.
Now, if you recall that whole hullabaloo where Hollywood was split into schisms, some studios backing Blu-ray Disc and others backing HD DVD: whichever format porno backs is usually the one that becomes the most successful.
- Kevin Sandusky, Burma, ~1960
Ah, yes the fabled Horny Text Markup Language.
They want to create a fuck bot at the end of the day.
I doubt that OpenAI themselves will do so, but I am absolutely confident that someone not only will be working on this, but I suspect that they probably already are. In fact, IIRC from an earlier discussion, someone was already selling sex dolls with said integration, and I doubt that they were including local parallel compute hardware for it.
kagis
I don’t think that this is the one I remember, but doesn’t really matter; I’m sure that there’s a whole industry working on it.
Chinese sex doll maker sees jump in 2025 sales as AI boosts adult toys’ user experience
The LLM-powered dolls are expected to cost from US$100 to US$200 more than existing versions, which are currently sold between US$1,500 and US$2,000.
WMDoll – based in Zhongshan, a city in southern Guangdong province – embeds the company’s latest MetaBox series with an AI module, which is connected to cloud computing services hosted on data centres across various markets where the LLMs process the information from each toy.
According to the company, it has adopted several open-source LLMs, including Meta Platforms’ Llama AI models, which can be fine-tuned and deployed anywhere.
But it was Facebook that torrented petabytes of porn??
That was all for “personal use”.
Goonerbook.
Yeah, but corpos get a pass on anything and everything.
The rest of us peons don’t
If we let them then yes.
The point was it was Meta, not OpenAI, so why is Altman doing this instead of the Zucc?
Controversial take:
A lot of porn production, from ‘borderline softcore’ like a lot of Instagram, to OnlyFans, to professional filmed hardcore, is some combination of shady/gross/exploitive.
…I will not complain if that industry gets smashed by AI.
As for the ‘corporate control’ aspect, all this stuff is racing towards locally run anyway (since ‘free’ trumps all in the porn market). DrawThings on iOS is already incredible, and that’s a massively unoptimized app in its infancy.
This isn’t refuting your point at all, but if we complain that porn gives young men unrealistic expectations for sex now, wait until they can generate literally anything and that becomes their new normal for their expectations.
(To be fair, they’ve always been able to do this if their imaginations were big enough)
Maybe there’s a crossover point where it becomes fantasy?
I’m playing devil’s advocate here. But I do feel like hyper-reality and animation and robots could be easier to psychologically separate from real life than ‘real’ film or camgirls or whatever. Especially if the curtain is pulled back, and all their knobs are exposed.
At… low points in my life, I’ve used locally run LLMs as sounding boards in lieu of family or whatever, so this is where I’m coming from. Even mentally compromised, all the technical setup/troubleshooting and knobs make it obvious I’m talking to a tool, not a grounded person. I feel a lot of AI would be healthier if presented that way, including the inevitable pornbots, instead of as the living oracle black boxes Tech Bros (and their apps) like to paint them as.
This is where I’ve been at on it. On top of that, the stuff that’s not directly exploiting human actors (like drawn or animated content) or pushing their boundaries is still coming out of studios that aren’t exactly known for healthy work/life balance. To say nothing of the kind of fetish content that might come out of those places too, which surely takes its own toll on creators.
If we can offload all of that potential trauma onto computers, I’m all for it.
Yeah… I never even thought about how all that commercial video hentai gets made.
I’d generally agree, the industry itself seems to be high risk for exploitation because of the very nature of it.
Realistically though this isn’t going to help that much, most AI generated content that tries to look real ends up quite uncanny. So this is probably more likely to cannibalise some of the least problematic parts of the industry like digital art.
You haven’t seen the newest Flux/Qwen/Wan workflows, much less all the new video models coming out.
Perfect, no, but porn is one of those industries where ‘good enough’ is good enough for most, when combined with convenience. See: potato-quality porn sites that still get tons of visitors.
It’s just all locked behind ‘enthusiast’ tooling and hardware now, but that won’t last long.
Not even that. You just need a relatively recent GPU if you want to run locally. There are plenty of sites that will give you free credits or charge to run your prompts, no local computation needed.
TBH most ‘average’ people don’t have GPUs or even know what that is, but they do have smartphones.
And they can already run these models pretty well too, albeit suboptimally. That’s my point. But the software needs to catch up.
But the software needs to catch up.
Honestly, there is a lot of potential room for substantial improvements.
-
Gaining the ability to identify edges of the model that are not particularly relevant to the current problem and unloading them. That could bring down memory requirements a lot.
-
I don’t think — though I haven’t been following the area — that current models are optimized for being clustered. Hell, the software running them isn’t either. There’s some guy, Jeff Geerling, who was working on clustering Framework Desktops a couple months back, because they’re a relatively-inexpensive way to get a ton of VRAM attached to parallel processing capability. You can have multiple instances of the software active on the hardware, and you can offload different layers to different APUs, but currently it’s basically running sequentially — no more than one APU is doing compute at any given moment. I’m pretty sure that’s something that can be eliminated (if it hasn’t already been). Then the problem — which he also discusses — is that you need to move a fair bit of data from APU to APU, so you want high-speed interconnects. Okay, that’s true if what you want is to run models designed for very expensive, beefy hardware on a lot of clustered, inexpensive hardware… but you could also train models to optimize for this, like using a network of neural nets with extremely-sparse interconnections between them and denser connections internal to them, where each APU runs only one neural net.
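The scheduling gap is easy to see with toy numbers. Here’s a sketch (hypothetical timings, ignoring interconnect transfer costs) comparing the naive sequential schedule, where only one APU computes at a time, against a pipeline-parallel schedule that keeps every APU busy once the pipeline fills:

```python
# Toy model: layers split evenly across D devices, processing B micro-batches.
# Assumes each device takes 1 time unit per micro-batch for its layer slice,
# and ignores interconnect transfer time. Numbers are illustrative.

def sequential_time(devices: int, micro_batches: int) -> int:
    """One micro-batch at a time through all device slices:
    only one device is ever doing compute."""
    return devices * micro_batches

def pipelined_time(devices: int, micro_batches: int) -> int:
    """Pipeline-parallel schedule: after a fill phase of (devices - 1)
    steps, all devices work concurrently on different micro-batches."""
    return (devices - 1) + micro_batches

if __name__ == "__main__":
    d, b = 4, 16  # 4 APUs, 16 micro-batches
    print(sequential_time(d, b))  # 64 time units, one APU busy at a time
    print(pipelined_time(d, b))   # 19 time units with the pipeline kept full
```

The fill/drain overhead shrinks relative to throughput as the number of micro-batches grows, which is why pipelining helps most on sustained workloads.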
-
I am sure that we are nowhere near being optimal just for the tasks that we’re currently doing, even using the existing models.
-
It’s probably possible to tie non-neural-net code in to produce very large increases in capability. To make up a simple example, LLMs are, as people have pointed out, not very good at giving answers to arithmetic questions. But… it should be perfectly viable to add a “math unit” that some of the nodes on the neural net interface with, and train it to make use of that math unit. And suddenly, because you’ve just effectively built a CPU into the thing’s brain, it becomes far better than any human at arithmetic… and potentially at things that make use of that capability. There are lots of things that we have very good software for today. A human can use software for some of those things, through their fingers and eyes — not a very high rate of data interchange, but we can do it. There are people like Musk’s Neuralink crowd who are trying to build computer-brain interfaces. But we can just build that software directly into the brain of a neural net and have the thing interface with it at the full bandwidth that the brain can operate at. If you build in software to do image or audio processing, to help extract information that is likely “more useful” but expensive for a neural net to compute, these models might get a whole lot more efficient.
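A toy illustration of the “math unit” idea, done crudely at the application layer (tool routing) rather than wired into the net itself; the router and fallback below are stand-ins, not any real model or API:

```python
import ast
import operator

# Arithmetic questions get routed to exact evaluation instead of being
# answered by the (unreliable-at-arithmetic) neural net. Everything here
# is a stand-in for illustration.

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def math_unit(expr: str):
    """Exactly evaluate a small arithmetic expression via the AST,
    rejecting anything that isn't plain arithmetic."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    """Route to the math unit when the input is pure arithmetic;
    otherwise fall through to the (hypothetical) language model."""
    try:
        return str(math_unit(question))
    except (ValueError, SyntaxError):
        return "(would be generated by the LLM)"

print(answer("12345 * 6789"))    # exact: 83810205
print(answer("tell me a joke"))  # falls through to the model
```

The in-brain version the comment describes would learn the routing during training instead of hard-coding it, but the payoff is the same: exact answers for the class of problems the exact tool covers.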
Whooa nellie.
I don’t care to get into that, really all I meant was ‘they need some low level work’
Popular models and tools/augmentations need to be quantized better and ported from CUDA to MLX/CoreML… that’s it.
That’s all, really.
They’d run many times faster and fit in RAM then, as opposed to the ‘hacked in’ PyTorch frameworks meant for research that they run on now. And all Apple needs to do is sic a few engineers on it.
I dunno about Android. That situation is much more complicated, and I’m not sure what the ‘best’ Vulkan runtime to port to is these days.
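For a sense of scale on the “fit in RAM” point, some napkin math (parameter counts and bits-per-weight are illustrative; real quantized files carry extra overhead for embeddings and scale factors):

```python
# Rough model-weight footprint at various quantization levels.
# Illustrative only: real quantized formats add overhead for scales,
# embeddings, and unquantized layers.

def model_gb(params_billions: float, bits_per_weight: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total / 1e9, 1)

for bits in (16, 8, 4):
    print(bits, model_gb(7, bits))
# 16-bit: 14.0 GB — hopeless on a phone
#  8-bit:  7.0 GB — borderline
#  4-bit:  3.5 GB — plausible on an 8 GB device
```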
-
most AI generated content that tries to look real ends up quite uncanny
I think that a lot of people who say this have looked at a combination of material produced by early models and operated by humans who haven’t spent time adapting to any limitations that can’t be addressed on the software side. And, yeah, they had limitations (“generative AI can’t do fingers!”) but those have rapidly been getting ironed out.
I remember posting one of the first images I generated with Flux to a community here, a jaguar lying next to a white cat. This was me just playing around. I wouldn’t have been able to tell you that it wasn’t a photograph. And that was some time back, and I’m not a full-time user, professionally-aimed at trying to make use of the stuff.
kagis
Yeah, here we are.
https://sh.itjust.works/post/27441182
“Cats”
https://lemmy.today/pictrs/image/b97e6455-2c37-4343-bdc4-5907e26b1b5d.png

I could not distinguish between that and a photograph. It doesn’t have the kind of artifacts that I could identify. At the time, I was shocked, because I hadn’t realized that the Flux people had been doing the kind of computer-vision processing on their images, as part of the training process, required to do that kind of lighting work at generation time. That’s using a model that’s over a year old — forever, at the rate things are changing — from a non-expert on just local hardware, and was just a first pass, not a “generate 100 and pick the best” or something that had any tweaking involved.
Flux was not especially amenable, as diffusion models go, to the generation of pornography last I looked, but I am quite certain that there will be photography-oriented and real-video oriented models that will be very much aimed at pornography.
And that was done with the limited resources available in the past. There is now a lot of capital going towards advancing the field, and a lot of scale coming.
I mean, that looks AI generated to me. In particular it looks like a ‘smooth skin’ and shiny FLUX image, which is kinda that model’s signature.
It’s not bad though.
It’s like a lot of AI content where if you’re just scrolling past, or not scrutinizing, it looks real enough. I’m sure soon it will take lots of scrutiny to distinguish.
Except these AI models need data to train on, they cannot improve without an industry to leach off of.
As if we didn’t already have more than enough pornographic material on all the hard drives worldwide for training. There’s nothing new to come in the image material from this industry, porn is infinite repetitions.
While I don’t disagree with your overall point, I would point out that a lot of that material has been lossily-compressed to a degree that significantly-degrades quality. That doesn’t make it unusable for training, but it does introduce a real complication, since your first task has to be being able to deal with compression artifacts in the content. Not to mention any post-processing, editing, and so forth.
One thing I’ve mentioned here — it was half tongue-in-cheek — is that it might be less-costly than trying to work only from that training corpus, to hire actors specifically to generate video to train an AI for any weak points you need. That lets you get raw, uncompressed data using high-fidelity instruments in an environment with controlled lighting, and you can do stuff like use LIDAR or multiple cameras to make reducing the scene to a 3D model simpler and more-reliable. The existing image and video generation models that people are running around with have a “2D mental model” of the world. Trying to bridge the gap towards having a 3D model is going to be another jump that will have to come to solve a lot of problems. The less hassle there is with having to deal with compression artifacts and such in getting to 3D models, probably the better.
There’s loads of hi-res ultra HD 4k porn available. If someone professional wants to train on that it’s not hard to find. If someone wants to play a leading role in the field of AI training, then of course they invest the necessary money and don’t use the shady material from the peer-to-peer network.
There’s loads of hi-res ultra HD 4k porn available.
It’s still gonna have compression artifacts. Like, the point of lossy compression having psychoacoustic and psychovisual models is to degrade the stuff as far as you can without it being noticeable. That doesn’t impact you if you’re viewing the content without transformation, but it does become a factor when something is processing the content, as training does. Like, you’re viewing something in a reduced colorspace with blocks and color shifts and stuff.
I can go dig up a couple of diffusion models finetuned off SDXL that generate images with visible JPEG artifacts, because they were trained on a corpus that included a lot of said material and didn’t have some kind of preprocessing to deal with it.
I’m not saying that it’s technically-impossible to build something that can learn to process and compensate for all that. I (unsuccessfully) spent some time, about 20 years back, on a personal project to add neural net postprocessing to reduce visibility of lossy compression artifacts, which is one part of how one might mitigate that. Just that it adds complexity to the problem to be solved.
It’s easy to get rid of that with prefiltering/culling and some preprocessing. I like BM3D+deblock, but you could even run them through light GAN or diffusion passes.
A lot of the amateur LoRA makers aren’t careful about that, but I’d hope someone shelling out for a major fine tune would.
Also “minor” compression from high quality material isn’t so bad, especially if starting with a pre trained model. A light denoising step will mix it into nothing.
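A crude sketch of what the prefilter/cull step could look like: score the strength of JPEG’s 8×8 block grid in an image and drop frames that exceed some threshold before training. The metric and thresholds here are illustrative, not from any real pipeline:

```python
import numpy as np

# Score "blockiness": how much stronger luma jumps are at 8-pixel block
# boundaries (JPEG's grid) than elsewhere. Clean images score near zero;
# heavily compressed ones score high and would be culled.

def blockiness(gray: np.ndarray) -> float:
    """Mean absolute horizontal jump at 8-pixel block boundaries minus
    the mean jump elsewhere."""
    dh = np.abs(np.diff(gray.astype(np.float64), axis=1))
    cols = np.arange(dh.shape[1])
    at_boundary = (cols % 8) == 7  # jumps that cross a block edge
    return float(dh[:, at_boundary].mean() - dh[:, ~at_boundary].mean())

rng = np.random.default_rng(0)
clean = rng.normal(128.0, 10.0, (64, 64))
blocky = clean.copy()
for b in range(0, 64, 16):        # raise alternating 8-pixel blocks so
    blocky[:, b:b + 8] += 30.0    # hard edges land exactly on the grid

print(blockiness(clean) < 2.0)    # no grid signature: keep for training
print(blockiness(blocky) > 5.0)   # strong grid signature: cull
```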
Lol, this too.
It’s honestly too much already.
Except these AI models need data to train on, they cannot improve without an industry to leach off of.
Not anymore.
The new trend in ML is training on synthetic data, alongside more refined sets of curated data.
And, honestly, the open base models we have now are ‘good enough’ with some finetuning, and maybe QAT.
Ah sweet model collapse.
That’s certainly something I’ve observed myself training GANs on their own output. It’s definitely a problem for the stupid (like Tech Bros).
But it doesn’t happen like you think, as long as the augmentations are clever, and their scope is narrow. Hence the success of several recent distillations and ‘augmented’ LLMs, and the failure of huge dataset trains like Llama4.
…And synthetic data generation/augmentation is getting clever, and is already being used in newer trains. See this, or newer papers if you search for them on arXiv: https://github.com/qychen2001/Awesome-Synthetic-Data
Or Nvidia’s HUGE focus on this, combining it with their work in computer graphics: https://www.nvidia.com/en-us/use-cases/synthetic-data-physical-ai/
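The core trick is that synthetic pairs can be verifiable by construction, so curation is just a programmatic check rather than scraping and labeling. A minimal toy sketch (the task and templates are made up for illustration):

```python
import random

# Generate training pairs whose answers are verifiable by construction,
# then keep only the ones that pass an independent check. Toy task/templates.

def generate_pair(rng: random.Random) -> tuple:
    a, b = rng.randint(2, 99), rng.randint(2, 99)
    prompt = f"What is {a} times {b}?"
    answer = str(a * b)  # ground truth comes from code, not a model
    return prompt, answer

def eval_check(prompt: str) -> int:
    """Re-derive the answer independently from the prompt text."""
    words = prompt.rstrip("?").split()
    return int(words[2]) * int(words[4])

def build_dataset(n: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    data = []
    while len(data) < n:
        p, ans = generate_pair(rng)
        if int(ans) == eval_check(p):  # curation: verify before keeping
            data.append((p, ans))
    return data

print(len(build_dataset(1000)))  # 1000 verified pairs, no scraping needed
```

Real pipelines generate with one model, filter with verifiers or reward models, and train another, but the shape is the same: generation plus an independent check.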
As for the ‘corporate control’ aspect, all this stuff is racing towards locally run anyway (since it’s free).
I am not at all sure about that. I use an RX 7900 XTX and a Framework Desktop with an AI Max 395+, both of which I got to run LLMs and diffusion models locally, so I’ve certainly no personal aversion to local compute.
But there are a number of factors pulling in different directions. I am very far from certain that the end game here is local compute.
In favor of local
-
Privacy.
-
Information security. It’s not that there aren’t attacks that can be performed using just distribution of static models (If Anyone Builds It, We All Die has some interesting theoretical attacks along those lines), but if you’re running important things at an institution that depend on some big, outside service, you’re creating attack vectors into your company’s systems. Not to mention that even if you trust the AI provider and whatever government has access to their servers, you may not trust them to be able to keep attackers out of their infrastructure. True, this also applies to many other cloud-based services, but there are a number of places that run services internally for exactly this reason.
-
No network dependency for operation, in terms of uptime. Especially for things like, say, voice recognition for places with intermittent connection, this is important.
-
Good latency. And no bandwidth restrictions. Though a lot of uses today really are not very sensitive to either.
-
For some locales, regulatory restrictions. Let’s say that one is generating erotica with generative AI stuff, which is a popular application. The Brits just made portraying strangulation in pornography illegal. I suspect that if a random cloud service is permitting generation of erotic material involving strangulation, they’re probably open to trouble. A random Brit person who is running a model locally may well not be in compliance with the law (I don’t recall if it’s just commercial provision or not), but in practical terms it’s probably not particularly enforceable. That may be a very substantial factor based on where someone lives. And the Brits are far from the most-severe. Iranian law, for example, permits execution for producing pornography involving homosexuality.
In favor of cloud
-
Power usage. This is, in 2025, very substantial. A lot of people have phones or laptops that run off batteries of limited size. Current parallel compute hardware to run powerful models at a useful rate can be pretty power-hungry. My RX 7900 XTX can pull 355 watts. That’s wildly outside the power budget of portable devices. An Nvidia H100 is 700W, and there are systems that use a bunch of those. Even if you need to spend some power to transfer data, it’s massively outweighed by getting the parallel compute off the battery. My guess is that even if people shift some compute to be local (e.g. offline speech recognition), it may be very common for people with smartphones to use a lot of software that talks to remote servers for a lot of heavy-duty parallel compute.
-
Cooling. Even if you have a laptop plugged into wall power, you need to dissipate the heat. You can maybe use eGPU accelerators for laptops — I kind of suspect that eGPUs might see some degree of resurgence for this specific market, if they haven’t already — but even then, it’s noisy.
-
Proprietary models. If proprietary models wind up dominating, which I think is a very real possibility, AI service providers have a very strong incentive to keep their models private, and one way to do that is to not distribute the model.
-
Expensive hardware. Right now, a lot of the hardware is really expensive. It looks like an H100 runs maybe $30k at the moment, maybe $45k. A lot of the applications are “bursty” — you need to have access to an H100, but you don’t need sustained access that will keep that expensive hardware active. As long as the costs and applications look like that, there’s a very strong incentive to time-share hardware, to buy a pool of them and share them among users. If I’m using my hardware 1% of the time, I only need to pay something like 1% as much if I’m willing to use shared hardware. We used to do this back when all computers were expensive: dumb terminals and teletypes connected to “real” computers that ran with multiple users sharing access to hardware. That could very much become the norm again. It’s true that I expect hardware capable of a given level of parallel compute will probably tend to come down in price (though there’s a lot of unfilled demand to meet). And it’s true that the software can probably be made more hardware-efficient than it is today. Those argue for costs coming down. But it’s also true that the software guys can probably produce better output and more-interesting applications if they get more-powerful hardware to play with, and that argues for upwards pressure.
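The back-of-envelope here, using the thread’s rough figures rather than real quotes:

```python
# Time-sharing arithmetic for bursty workloads. Prices are the rough
# figures from the discussion ($30k H100), amortized over an assumed
# 3-year lifetime; all numbers are illustrative.

CARD_PRICE = 30_000              # USD, rough
LIFETIME_HOURS = 3 * 365 * 24    # 26,280 hours over 3 years
UTILIZATION = 0.01               # bursty workload: busy 1% of the time

dedicated_per_hour = CARD_PRICE / LIFETIME_HOURS
effective_per_busy_hour = dedicated_per_hour / UTILIZATION
shared_per_busy_hour = dedicated_per_hour  # pooled card stays ~fully busy

print(round(dedicated_per_hour, 2))       # ~$1.14/hr to own it outright
print(round(effective_per_busy_hour, 2))  # ~$114/hr per hour actually used
print(round(shared_per_busy_hour, 2))     # ~$1.14/hr when pooled
```

The hundredfold gap between dedicated and pooled cost per busy hour is the whole mainframe-era argument in one number.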
-
National security restrictions. One possible world we wind up in is where large parallel compute systems are restricted, because it’s too dangerous to permit people to be running around with artificial superintelligences. In the Yudkowsky book I link to above, for example, the authors want international law to entirely prohibit beefy parallel-compute capability to be available to pretty much anyone, due to the risks of artificial superintelligence, and I’m pretty sure that there are also people who just want physical access to parallel compute restricted, which would be a lot easier if the only people who could get the hardware were regulated datacenters. I am not at all sure that this will actually happen, but there are people who have real security concerns here, and it might be that that position will become a consensus one in the future. Note that I think that we may already be “across the line” here with existing hardware if parallel compute can be sharded to a sufficient degree, across many smaller systems — your Bitcoin mining datacenter running racks of Nvidia 3090s might already be enough, if you can design a superintelligence that can run on it.
I disagree with many points, positive and negative.
-
…Honestly, the average person does not care about privacy, security, or being offline. They have shown they will gladly trade all that away for cheap convenience, repeatedly.
-
Nor do they care about power usage or cooling. They generally do not understand thermodynamics, and your 395 (much less an iPhone GPU) would be a rounding error in their bill.
I’m not trying to disparage folks here, but that’s how they are. We’re talking ‘average people.’
-
As for proprietary models, even if we don’t get a single new open source release, not one, the models we have right now (with a little finetuning/continue training) are good enough for tons of porn.
-
On hardware, I’m talking smartphones. And only smartphones. They’re powerful enough already, they just need a lot of software work and a bit more RAM, but everyone already has one.
-
Regulatory restrictions at either end are quite interesting, and I’m honestly not sure how it will pan out. Though I’m skeptical of any ‘superintelligence’ danger from commodity hardware, as the current software architectures are just not leading to that.
What I am getting at is the ‘race to the bottom.’
Fact is, running SDXL or Qwen class models on your iPhone is reasonably fast (results in a few seconds on my base iPhone 16, again highly unoptimized). And it’s free.
Not free with a catch, free.
That kind of trumps all other concerns, once the barriers come down. If it’s free, people (and cheap porn middlemen) will find a way. Hence the popularity of free porn now, no matter how crap it is or how much it’s restricted.
So, I’m just talking about whether-or-not the end game is going to be local or remote compute. I’m not saying that one can’t generate pornography locally, but asking whether people will do that, whether the norm will be to run generative AI software locally (the “personal computer” model that came to the fore in the mid-late 1970s and on or so) or remotely (the “mainframe” model, which mostly preceded it).
Yes, one can generate pornography locally… but what if the choice is between a low-resolution, static SDXL (well, or derived-model) image and a service that leverages compute to get better images, or something like real-time voice synth, recognition, dialogue, and video? I mean, people can get static pornography now in essentially unbounded quantities on the Internet; it exists in immense quantity; if someone spent their entire lives going through it, they’d never, ever see even a tiny fraction of it. Much of it is of considerably greater fidelity than any material that would have been available in, say, the 1980s; certainly true for video. Yet… even in this environment of great abundance, there are people subscribing to commercial (traditional) pornography services, and getting hardware and services to leverage generative AI, even though there are barriers in time, money, and technical expertise to do so.
And I’d go even further, outside of erotica, and say that people do this for all manner of things. I was really impressed with Wolfenstein 3D when it came out. Yet… people today purchase far more powerful hardware to run 3D video games. You can go and get a computer that’s being thrown out that can probably run dozens of instances of Wolfenstein 3D concurrently… but virtually nobody does so, because there’s demand for the new entertainment material that the new software and hardware permit.
Fair.
I guess it depends what becomes the ‘norm.’ Thing about any GenAI service is it’s not free. It’s not even ‘cheap’ like streaming video or images en masse is; every generation is tailored and costs cloud money.
And Apple seems rather hell-bent on pushing beefy hardware to basically half of all people in the western world. A few generations of iPhone and software can absolutely get that confluence of ‘real-time voice synth, recognition, dialogue, and video’ like you describe, and there are fundamental cloud issues going beyond that: game-like, local pornography ‘virtual reality’ is much better done (at least partially) on device, because of how hard game streaming is.
…But you still make good points. It could go either way.
I mean, people can get static pornography now in essentially unbounded quantities on the Internet; it exists in immense quantity; if someone spent their entire lives going through it, they’d never, ever see even a tiny fraction of it.
On this specifically, TONS of people have no idea how to use a browser. Dare I say most, these days? Their whole internet is what’s available through algorithmic recommendations on app stores, hence this ‘sea’ of static pornography might be more limited than you’d think.
There’s also, apparently, a huge demand for basic interactivity, hence the unreasonable popularity of OnlyFans. And OF-type interactivity is quite crap if you ask me.
-
-
Just a few months back Sammy was dunking on Musk for allowing porn on Grok. What changed now?
They realized how much revenue they were leaving on the table
Investors rang up
They realized that realistically, the main use of AI for creative work is to make niche fetish stuff; otherwise, the human-produced equivalent is always superior.
Is it voyeur for OpenAI employees to read a person’s sexting chats with their AI? 🤔
Sam Altman is pretty awful, modern AI is pretty awful, but what specifically is pretty awful about this?
I guess I don’t like the idea of AI getting popular enough to pay for because it can generate porn.
I was hoping for more, I guess. :)
But sure, I get it. I watch porn too. And beautiful women are exciting. On a spiritual level, though, it feels super primitive to sit and jerk off to images or video.
Also, it definitely damages people. We just can’t stop.
Honestly, good for them. Companies that ban mature content are the worst.
On one hand, more power to the people. On the other hand, ChatGPT is being operated by a company run by Sam Altman.
In one hand…
So desperate to make money, they are going to prostitute their child. Pardon me while I go throw up.
deleted by creator