• Croquette@sh.itjust.works · 10 hours ago

    LLMs are just sophisticated text-prediction engines. They don’t know anything, so they can’t produce an “I don’t know”: they can always generate a prediction, and they can’t think.

    • zeca@lemmy.eco.br · 4 hours ago

      They could be programmed to do some double/triple checking and return “I don’t know” when the checks come back negative. I guess that would compromise the oracle-like appearance their parent companies seem to be quietly pushing onto them.
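
      A minimal sketch of that kind of check, purely illustrative: sample the same question a few times and answer “I don’t know” when the samples disagree. The `ask_llm` helper is hypothetical and stands in for whatever model API is actually in use.

      ```python
      import collections

      def ask_llm(prompt: str) -> str:
          """Hypothetical helper: returns one completion for `prompt`.
          Stand-in for a real LLM API call."""
          raise NotImplementedError

      def answer_with_check(question: str, samples: int = 3) -> str:
          # Ask the same question several times (the double/triple checking).
          answers = [ask_llm(question).strip().lower() for _ in range(samples)]

          # If the samples don't all agree, treat the answer as unreliable.
          most_common, count = collections.Counter(answers).most_common(1)[0]
          if count < samples:
              return "I don't know"
          return most_common
      ```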

    • Cyberflunk@lemmy.world · 7 hours ago

      Tool use, reasoning, chain of thought: those are the things that set LLM systems apart. While you are correct in the most basic sense, it’s like saying a car is only a platform with wheels; it’s reductive of the capabilities.

      • Croquette@sh.itjust.works · 5 hours ago

        LLMs are prediction engines. They don’t have knowledge; they only chain together words related to your topic.

        They don’t know they are wrong because they just don’t know anything, period.