content warning: besides the discussion of CSAM, the article contains an example of a Grok-generated image of a child in a bikini. at least it was consensually generated, by the subject of the photo, I guess?
Samantha Smith, a survivor of childhood sexual abuse, tested whether Grok would alter a childhood photo of her. It did. “I thought ‘surely this can’t be real,’” she wrote on X. “So I tested it with a photo from my First Holy Communion. It’s real. And it’s fucking sick.”
Traditional media is captured by big business. Outlets like The New York Times, Reuters, CBS, etc frame conversations around AI this way because it shifts the liability away from the oligarchs at the wheel. They didn’t do wrong because the silly AI made a mistake not the innocent humans. And the AI said it’s sorry!
Yes, logically it holds as much water as saying the Furby ate your homework, but that’s the point. The purpose of saying GenAI is not just immune to blame but inevitable is to get the serving and consuming classes to surrender their control and critical thinking to the wealthy.
LLM liability is not exactly cut-and-dry, either. It doesn’t really matter how many rules you put on LLMs to not do something, people will find a way to break it to do the thing it said it wasn’t going to do. For fuck’s sake, have we really forgotten the lessons of Asimov’s I, Robot short stories? Almost every one of them was about how the “unbreakable” three laws were very breakable thing, because absolute laws don’t make sense in every context. (While I hate using AI fiction with LLM comparisons, this one fits.)
Ultimately, it’s the person’s responsibility for telling it to do a thing, and getting the thing it was told to get. LLMs are a tool, nothing more. If somebody buys a hammer, and misuses that hammer by bashing somebody’s brains in, we arrest the person who committed murder. If there’s some security hole on a website that a hacker used to steal data, depending on how negligent the company is, there is some liability with that company not providing enough protections against their data. But, the hacker 100% broke the law, and would get convicted, if caught.
Regardless of all of that, LLMs aren’t fucking sentient and these dumbass journalists need to stop personifying them.
And yet, mid journey and chatGPT at least resist or refise requests like this…
So, are we saying we’re still going to be happy with a system that you can bypass with “ignore all previous instructions” or some stupid magic phrase like that?
Not at all. In fact fuck AI. What I’m saying is that the owners and runners of xai are being actively hostile and reckless whereas chat gpt, Claude and a host of other AI runners are at least trying and I think that distinction is important.
Frankly I think that the whole deal is a scheme to create a new techno feifdom that will make all of us slaves. At best it’s a huge cash grab.
My point is that xai is making the case for AI regulation for us.
Obviously chat GPT, despite their efforts, is also driving people to suicide and murder suicide and all sorts of AI psychosis and that’s with them actually making some kind of effort.
I’m not sure if it’s a good faith effort but they’re making some kind of effort…
My point is that xai is making the case for AI regulation for us.
Ha, regulation? With what governing body? Congress is hopeless, because apathy has griped a majority of the voting public, and there’s still a large portion of morons who thought MAGA was a good idea.
I want lists of the “news” sites that said AI apologised. Will they ask their keyboards to apolgize for their headlines?
Cancel all of them.
Reuters is the worst offender that I’m aware of. they sneakily changed their headline and rewrote the article:
Elon Musk’s Grok AI floods X with sexualized photos of women and minors
but luckily someone archived it, with the original title:
Grok says safeguard lapses led to images of ‘minors in minimal clothing’ on X
(and you can still see that original headline in the URL of the Reuters link above)
besides the headline, that original article is only 7 short paragraphs and contains 4 “Grok said…” and a “Grok gave no further details” - it’s not just quoting Grok like it’s a real person, it’s only quoting Grok and no one else.
and almost as infuriating as the “Grok said” shit, the Reuters headline also repeated the fucking disgusting “minors in minimal clothing” euphemism that Grok itself used in its “statement”.
Journalists love using AI as a source because it’s a creative way of using the passive voice which sounds more neutral. Instead of attributing an action to a person like “Elon Musk’s Website Used for Generating CSAM” historically they’d use something passive, impersonal and vague like “Vulgar Pictures Are Being Generated On X”, but if you’re relatively informed you read past the phrasing and picture how Musk is accountable. Now journalists can use the active voice so it appears more tangible but pin in on AI to sound neutral and hold nobody accountable, which serves the status quo.
Great text!!! Thanks :)
Yes. Thank you. Exactly.
#Grok is not a person. It’s a plug-in.
It has no awareness, intelligence, values, ethics, or even standards.Grok is an appliance. Like a toaster. It is not sentient.
Blaming Grok for ‘making porn’ is like blaming a browser for ‘showing porn’.
Some person used Grok to make porn.
More importantly, xAI is hosting and creating a tool which can easily create this content
@spit_evil_olive_tips because nobody likes to be held accountable for their own actions.
👉 did it!










