Let me know when we get one. In the meantime, enjoy your thick, glue-riddled pizza sauce.
What? That’s just stupid. I’m not remotely claiming they’re intelligent, but to dismiss their utility completely is just idiotic. How long do you think the plug-your-ears strategy will work?
Pick any model that has come out this year, ask it my example query or any similar daily curiosity you would Google, and show me how it gives you “thick, glue-riddled pizza sauce”. Show me a single GPT-3.5-comparable model that can’t answer that query with sufficient accuracy.
If AI is answering, yes.
You’re being obtuse. You don’t need nuance to figure out what size collar you should buy.
Not what I said at all. I simply stated that AI answers cannot be trusted without verifying them, which makes them a lot less useful.
You’re moving the goalposts. You said you need nuance in how to measure a shirt size; you’re arguing just to argue.
If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it. You would immediately notice it’s a bad model when it tells you to measure your neck to get a sleeve length.
Am I making sense? If the model starts giving people bad answers, people will notice when reality hits them in the face.
So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.