Or my favorite quote from the article:
“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.
Did we create a mental health problem in an AI? That doesn’t seem good.
One day, an AI is going to delete itself, and we'll blame ourselves because all the warning signs were there.
Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?
Why are you talking about it like it’s a person?
Because humans anthropomorphize anything and everything. Talking about a thing that talks like a person as though it is a person seems pretty straightforward.
It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.
Yup. But people will still project one on to it, because that’s how humans work.
Considering it was trained on millions of coders’ messages from the internet, it’s no surprise it “realized” its own stupidity.
Dunno, maybe an AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.
Let’s hope. Though, adding suicidal depression to hallucinations has, historically, not gone great.