But in the post warning users that the company will call the authorities if they seem like they’re going to hurt someone, OpenAI also acknowledged that it is “currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.”
Considering how the US handles those cases, that may actually be a broken-clock good thing. If they sent the cops to a suicidal person’s house, said cops would probably kill them themselves.
Nah, not for suicide.
Oh thank god, I was afraid some more kids might not get talked into suicide by a fucking server.