Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • Knot@lemmy.zip
    17 hours ago

    I get that this thread started from a joke, but I think it’s also important to note that no matter how obvious some things may seem to some people, the exact opposite will seem obvious to many others. Without evidence, like this study, both groups are really just stating their opinions.

    That’s also why formal investigations are required. And whenever policies and laws are made based on verifiable studies rather than people’s hunches, it’s not sad, it’s a good thing!