• 0 Posts
  • 276 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • It is absolutely stupid, stupid to the tune of “you shouldn’t be a decision maker”, to think an LLM is a better tool for “getting a quick intro to an unfamiliar topic” than reading an actual intro to that topic. For most topics, Wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.

    As for “looking up facts when you have trouble remembering them”, using the lie machine is a terrible idea. It’s going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you’d be better off finding a reputable source. If I type in “how do i strip whitespace in python?” an LLM could very well say “it’s your_string.strip()”. That’s wrong: strip() only removes leading and trailing whitespace, not whitespace inside the string. Just send me to the fucking official docs.
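    A quick sketch of why that plausible answer falls short, assuming the question meant removing all whitespace (the strip() behavior shown is standard Python):

```python
import re

s = "  hello world  "

# str.strip() only removes leading and trailing whitespace;
# the space between the words survives.
print(s.strip())               # hello world

# Removing *all* whitespace needs something else entirely,
# e.g. a regex substitution.
print(re.sub(r"\s+", "", s))   # helloworld
```

Exactly the kind of distinction the official docs state up front and a confident one-liner glosses over.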

    There are probably edge or special cases, but for general search on the web? LLMs are worse than search.

  • Well, in this example, the information provided by the AI was simply wrong. If it had instead done what a traditional search does and pointed to the organization’s website, where the hours are listed, it would have worked fine.

    This idea that “we’re all entitled to our opinion” is nonsense. That’s for when you’re a child and the topic is which jelly bean flavor you like. It’s not for policy or things that actually matter. You can’t “it’s my opinion” your way through “this algorithm is O(n^2) but I like it better than the O(n) one, so I’m going to use it for my big website”. Or, more on topic, you can’t use it for “these results are wrong but I like them better”.
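    To make that algorithm example concrete, here is a hypothetical illustration (the function names and sample data are made up for the sketch): the quadratic version isn’t worse “in my opinion”, it measurably does more work as the input grows.

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements.
    return any(
        items[i] == items[j]
        for i in range(len(items))
        for j in range(i + 1, len(items))
    )

def has_duplicates_linear(items):
    # O(n): a single pass, remembering what we've seen in a set.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = [3, 1, 4, 1, 5]
# Both give the same answer; only the cost differs.
print(has_duplicates_quadratic(data), has_duplicates_linear(data))  # True True
```

Preference doesn’t enter into it: the pair-comparison count grows with n², the set lookup count with n.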