Look, as long as you're a NATO nation, we're a perfectly peaceful and reasonable superpower with a military that would scorch the earth to ash within 24 hours.
Well it means they need some ability to reject some content, which means they need a level of transparency they would never want otherwise.
It’s more "we are so focused on stealing and eating content, we’re accidentally eating the content we or other AIs made, which is basically incest for AI, and they’re all inbred to the point they don’t even know people have more than two thumb-shaped fingers anymore."
So yes. That’s what’s happening.
The inciting thought behind most criminal acts is "they’ll never catch me." If you’re as lucky as me, you know you’ll get caught every time, and they’ll make an example of you. It’s kept my nose clean a long time.
If you read a lot of news, it’s really clear Tor isn’t protecting anyone from the FBI. It’s about as effective as using LimeWire at this point, and the reporting makes it pretty clear it doesn’t hide criminal acts in the least. But it’s great that abusers think it’s effective, because that’s how they get caught.
The current method is auto-deleting NSFW images. It doesn’t matter how you got there: the service detects NSFW, it dumps the image, and you never receive it. Beyond that, gating NSFW content generation behind a paywall or ID wall would stop a lot of teenagers. Not all, but it would put a dent in it. There are also AI models that will allow some NSFW if it’s clearly in an artistic style, like a watercolor painting, but will kick back NSFW realism: photography, rendered images, that sort of thing. These checks usually run at both stages: in prompt mode, paint in/out, and image-reference mode, to block generation of likely NSFW images, and after generation, an NSFW check before delivering the image. AI services are anticipating full-on legal consequences for allowing any NSFW, or any realistic, photographic, or CGI image of a living person without their consent; it’s easy to see that’s what they are prepared for.
If you really must, you can simply have the AI auto-delete NSFW images; several already do this. And if you argue you can’t simply refuse to generate or deliver NSFW images at all, you can still gate NSFW content generation behind any number of hindrances that are highly effective against anonymous or underage use.
That’s all well and good for removing them, but it solves nothing on its own. At this point every easily accessible AI I’m aware of is kicking back any prompt containing the name of a real-life person; they’re already anticipating real laws. Preventing the images from being made in the first place isn’t impossible.
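The prompt-level block described above is even simpler to sketch. Names here are hypothetical; real services presumably match against large denylists and fuzzier name detection, not a tiny set:

```python
# Rough sketch: reject any prompt containing a known real person's name
# before generation even starts. KNOWN_NAMES is a stand-in denylist.

KNOWN_NAMES = {"jane doe", "john smith"}  # hypothetical entries

def prompt_allowed(prompt: str) -> bool:
    """Return False when the prompt names someone on the denylist."""
    lowered = prompt.lower()
    return not any(name in lowered for name in KNOWN_NAMES)
```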
There are big client companies that have already cut them off, and they won’t bother coming back for the same price and unsubstantiated changes. Adobe is changing this now because they are bleeding.
Even before that, they had been accused of not buying the stocks users ordered, then buying at the time of the sell order and waiting for the price to rise before selling, so they pocket the profit. It’s been questioned for a long time.