content warning: besides the discussion of CSAM, the article contains an example of a Grok-generated image of a child in a bikini. at least it was consensually generated by the subject of the photo, I guess?
Samantha Smith, a survivor of childhood sexual abuse, tested whether Grok would alter a childhood photo of her. It did. “I thought ‘surely this can’t be real,’” she wrote on X. “So I tested it with a photo from my First Holy Communion. It’s real. And it’s fucking sick.”

Journalists love using AI as a grammatical subject because it works as a creative stand-in for the passive voice: it sounds neutral. Historically, instead of attributing an action to a person (“Elon Musk’s Website Used for Generating CSAM”), a headline would be passive, impersonal, and vague (“Vulgar Pictures Are Being Generated On X”), but a relatively informed reader sees past the phrasing and pictures how Musk is accountable. Now journalists can write in the active voice, which feels more tangible, while pinning the action on AI to sound neutral and hold nobody accountable. That serves the status quo.