Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

Or how bad something is. “I don’t need a scientific study to tell me that looking at my phone before bed will make me sleep badly,” yet the studies actually show that the effect, while statistically robust, is small.
In the same way, studies like this one can distinguish between different levels of advice and warning.
I remember discussing this and doing a critical appraisal of it. It turned out to be less about the phone itself and more about emotional dysregulation / emotional arousal delaying sleep onset.
So yes, agreed: we need studies, and we need to know how to read them and think through them together.