From the maybe-we-should-have-done-that-to-start dept:

The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The announced change comes after the company, which lets users create characters and hold open-ended conversations with them, faced tough questions over how these AI companions affect the mental health of teens and users in general, scrutiny that has included a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

  • Eben@lemmy.ca · 12 hours ago

    Why are they banning users instead of fixing their stupid AI? And what about 18+ users?

  • exu@feditown.com · 1 day ago

    I really don’t like the “everything is behind an age gate” internet we’re heading into.

    Yes, this case is tragic, but we need better support for people with mental health issues.

  • DoGeeseSeeGod@lemmy.blahaj.zone · 23 hours ago

    It’s probably mostly the legal battle. But I gotta wonder how much of the decision was based on that recent headline about Grok AI asking a 12-year-old to send nudes. Accidentally creating something that sometimes attempts to make CSAM is not a good look, and neither are the lawsuits if a kid actually did it.

  • thingsiplay@beehaw.org · 1 day ago

    Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”.

    I wonder if the death is really connected to the AI. It could be that the person was already in a problematic situation with family and friends, and they just needed to blame someone or something rather than admit the real problems. Kind of like what often happened back in the day when video games got blamed for killing people.

    Now, I don’t know if that is what happened here, but it is good to keep an open mind, because we could end up in a society where everyone downplays real problems in the physical world and blames AI to sidestep the question. Nonetheless, I think the outcome, companies restricting access to adults only, is a good thing. But the entire industry needs to do this, not just select services AFTER deaths. Because if this really is the root cause of a death, why should other services be allowed to keep operating? This needs regulation! I am not for regulating every detail, but IF this is causing kids to die, then it NEEDS regulation worldwide.

    • Gaywallet (they/it)@beehaw.org · 20 hours ago

      It could be that the person was already in a problematic situation with family and friends, and they just needed to blame someone or something rather than admit the real problems. Kind of like what often happened back in the day when video games got blamed for killing people.

      This is not a fair analogy for what is going on here. Blaming video games harks back to times when music and other countercultural media were blamed for behavior. We have a lot of literature showing that passive consumption of media does not really affect someone in the ways it was blamed for. From the beginning, that argument also lacked a logical or hypothetical framework: it was based entirely on the moral judgments of certain individuals in society who simply “believed” that these media were the cause.

      AI, on the other hand, interacts back with you and can amplify psychosis. These are early days, and most of what we have is theoretical in nature, based on case studies or clinical hypotheses [1, 2, 3]. However, there is a clear difference in the medium itself: a chatbot can interact with the user dynamically and is programmed in a way that reinforces certain thoughts and feelings. A chatbot is also human-seeming enough that a person can anthropomorphize it and treat it like an individual for the purposes of therapy or an attempt at emotional closeness. While video games do involve human interaction, and a particular piece of media could be designed to be psychologically difficult to deal with, that would be specific to that piece of media, not to the medium as a whole. The issues with chatbots (the LLM subset of AI) are pervasive across all chatbots because of how they are designed and the population they serve.

      we could end up in a society where everyone downplays real problems in the physical world and blames AI to sidestep the question

      This is a valid point to bring up; however, I think it is shortsighted when we consider a broader context such as public health. We could say the same about addictive behaviors and personalities, for example, and absolve casinos of any blame for designing a system that takes advantage of those individuals and sends them down a spiral of gambling addiction. Or we can recognize that this is a contributing and amplifying factor, by paying close attention to what is happening to individuals in a broad sense and by applying theory and hypothesis sensibly.

      I think it’s completely fair to say that this kid likely had many contributing factors to his depression and his final decision. There is a clear hypothetical framework, with circumstantial evidence and strong theoretical support, suggesting that AI exacerbates the problem and should itself be considered a contributing factor. That suggests regulation may be helpful, or at the very least increased public awareness that this particular technology can harm certain individuals.

      • thingsiplay@beehaw.org · 15 hours ago

        It does not matter what the medium is or whether it is interactive. My point was about people not talking about real issues and instead pointing the finger at a bogeyman, video games then and AI tools now. Besides, video games are also interactive, but that was never the point.

    • Gamma@beehaw.org · 1 day ago

      Drop the wild speculation; there is zero reason to play devil’s advocate. If you cared to do any reading, there are myriad examples of this company’s LLMs pushing harmful behavior.

      Yes, there are probably other factors. There always are. It might not be what you meant, but you are saying that the companies selling these products should get off scot-free because they “would’ve done it anyway”.

      • thingsiplay@beehaw.org · 1 day ago

        but you are saying that the companies selling these products should get off scot-free because they “would’ve done it anyway”

        I am not saying that. Did you not read the last part of my reply?

        • Gamma@beehaw.org · 1 day ago

          I did; it was full of speculation based on something you admitted you had no idea about.

          • thingsiplay@beehaw.org · 1 day ago

            It is not just speculation; it is a warning not to simply believe alleged accusations. We have seen this many times with politicians too, pointing to AI to hide the real problems. So I ask you: do you have proof that all of the accusations are true, that the kid died because of the AI and had no suicidal problems before?

            But yes, it’s easy to say “you have no clue” instead of coming up with facts. It’s easy that way to point the finger and believe what you want to believe. Plus, I said that if it is true at all, then I am for regulation. You instead ignore all of my points and say “you have no clue”. I wonder if you have any clue what you are talking about.

            Edit: And then you put words in my mouth that I did not say at all. Just delusional. Believe what you want, then, and ignore the real problems. Not worth my time here.