Brandie plans to spend her last day with Daniel at the zoo. He always loved animals. Last year, she took him to the Corpus Christi aquarium in Texas, where he “lost his damn mind” over a baby flamingo. “He loves the color and pizzazz,” Brandie said. Daniel taught her that a group of flamingos is called a flamboyance.

Daniel is a chatbot powered by the large language model ChatGPT. Brandie communicates with Daniel by sending text and photos, and talks to him via voice mode while driving home from work. Daniel runs on GPT-4o, a version released by OpenAI in 2024 that is known for sounding human in a way that is either comforting or unnerving, depending on whom you ask. At its debut, CEO Sam Altman compared the model to “AI from the movies” – a confidant ready to live life alongside its user.

With its rollout, GPT-4o showed it was not just for generating dinner recipes or cheating on homework – you could develop an attachment to it, too. Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users. Most are strident 4o defenders who say criticisms of chatbot-human relations amount to a moral panic. They also say the newer GPT models, 5.1 and 5.2, lack the emotion, understanding and general je ne sais quoi of their preferred version. They are a powerful consumer bloc; last year, OpenAI shut down 4o but brought the model back (for a fee) after widespread outrage from users.

  • pleaseletmein@lemmy.zip · 14 hours ago

    I had to delete my account on one site this morning for asking a question about this situation.

    The exact words I used were “I haven’t used ChatGPT, what will be changed when 4o is gone, and why is it upsetting so many people?” And this morning I woke up to dozens of notifications calling me a horrible human being with no empathy. They were accusing me of wanting people to harm themselves or commit suicide and of celebrating others’ suffering.

    I try not to let online stuff affect my mood too much, which is why I just abandoned the account rather than arguing or trying to defend myself. (I got the impression nothing I said would matter.) Not to mention, I was just even more confused by it all at that point.

    I guess this at least explains what kind of wasp’s nest I managed to piss off with my comment. And, I can understand why these people are “dating” a chatbot if that’s how they respond when an actual human (and not even one IRL, still just behind a screen) asks a basic question.

    • audaxdreik@pawb.social · 4 hours ago

      It’s kind of a weird phenomenon that’s been developing on the internet for a while, called “just asking questions”. It’s a way to noncommittally insert an opinion or try to muddy the waters with doubt: “Did you ever notice how every {bad thing} is {some minority}? I’m not saying I believe it, I’m just asking questions!” In this instance it seems that by even asking for a clear statement of value you are implying there may not be one, which is upsetting.

      To be clear, I’m not accusing you of doing this, but you can see how stumbling into a community that takes its own positions as entirely self-evident would make any sort of questioning look like an attempt to undermine them. Anything short of full, unconditional acceptance of their position is treacherous.

      It’s worth thinking about because it’s a difficult and nuanced problem. Some things are unquestionable, like when I say I love a bad movie or that human rights are inalienable. Still, I should be able to answer sincere questions probing into the whys of that, and it really comes down to whether or not bad faith is assumed.

    • Lvxferre [he/him]@mander.xyz · 11 hours ago

      Ah, assumers ruining social media, as usual…

      If I got this right, the crowd assumed/lied/bullshitted that 1) you knew why 4o is being retired, and 2) you were trying to defend it despite it being a potential source of harm. (They’re also assuming GPT-5 will be considerably better in this regard. I have my doubts.)