I really don’t have this experience with ChatGPT. Every once in a while it returns an answer that doesn’t seem legitimate, so I ask, “Really?” And then it comes back with, “No, that is incorrect.” Which… I really hope the robots responsible for eliminating humans are not so hapless. But the stories about AI encouraging kids to kill themselves or citing books that don’t exist seem a little made up. And, like, don’t get me wrong: I want to believe ChatGPT listed glue as a good ingredient for making pizza crust thicker… I just require a bit more evidence.
Tech bro outed
Are you a librarian?
I am not.