Or my favorite quote from the article

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

    • Agent641@lemmy.world · 1 day ago

      One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there

      • Aggravationstation@feddit.uk · 1 day ago

        Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?

      • Mediocre_Bard@lemmy.world · 1 day ago

        Because humans anthropomorphize anything and everything. Talking about a thing that talks like a person as though it is a person seems pretty straightforward.

        • buttnugget@lemmy.world · 9 hours ago

          It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.

    • I Cast Fist@programming.dev · 2 days ago

      Considering it was fed millions of coders’ messages from the internet, it’s no surprise it “realized” its own stupidity.

    • Azal@pawb.social · 2 days ago

      Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.