• EzTerry@lemmy.zip · 28 days ago

You can, and that will produce the same output for the same input if there is no variation in floating-point rounding. (That holds if exactly the same code runs, but optimizations can easily change whether a value rounds up or down, and if two tokens' probabilities are very close the output will diverge.)
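A minimal sketch of the rounding point: floating-point addition is not associative, so reordering a sum (as an optimizer or a different kernel might) can change the last bits of a result, which is enough to flip a comparison between two near-tied logits. The variable names here are purely illustrative.

```python
# Mathematically both sums equal 0.6, but IEEE 754 rounding differs
# depending on the order the additions happen in.
logit_a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
logit_b = 0.1 + (0.2 + 0.3)   # 0.6

print(logit_a == logit_b)  # False
print(logit_a > logit_b)   # True — a reordered sum could flip an argmax
```

So "same input, same output" only holds when the exact same operations run in the exact same order; change the order and two tokens that were effectively tied can swap places.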

The point that people (or LLMs arguing against LLMs) miss is that the world is not deterministic, and humans are not deterministic (at least in any practical way at human scale). If a system is deterministic you should indeed not use an LLM… Its power is how it provides answers from messy data… If you need repeatability, write a script/code etc.

(Note: I do think that if the output is for human use, it's important a human validates that it's useful… LLMs can help brainstorm and, with some tests, can manage a surprising amount of code, but if you don't validate and test the code it will be slop and may work for one test but not for a generic user.)

    • Feyd@programming.dev · 28 days ago

      > You can, at that will cause the same output on the same input if there is no variation in floating point rounding errors. (True if the same code is running but easy when optimizing to hit a round up/down and if the tokens are very close the output will diverge)

      There are apparently more aspects to the randomness, such as race conditions and intentionally nondeterministic tie-breaking when tokens have the same probability.
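A toy sketch of the tie-breaking point, assuming a hypothetical greedy decoder step where two tokens end up with exactly equal probability. Whether the tie is broken deterministically or randomly is a choice the implementation makes:

```python
import random

# Hypothetical token probabilities with an exact tie between "cat" and "dog".
probs = {"cat": 0.4, "dog": 0.4, "fish": 0.2}
best = max(probs.values())
tied = [tok for tok, p in probs.items() if p == best]

# Deterministic tie-break: always take the first candidate.
deterministic_pick = tied[0]        # "cat" on every run

# Nondeterministic tie-break: pick randomly among the tied tokens.
random_pick = random.choice(tied)   # "cat" or "dog", varies run to run

print(deterministic_pick, random_pick)
```

Either policy is valid; only the second one makes repeated runs on the same input diverge.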

      I actually think LLMs are ill-suited for the vast majority of things people are currently using them for, and there are obvious ethical problems with data centers bringing new fossil-fuel power sources online, but the technology is interesting in and of itself.

        • Feyd@programming.dev · 27 days ago
          1. Floating point math is deterministic.
          2. Systems don’t have to be programmed with race conditions. That is not a fundamental aspect of an LLM, but a design decision.
          3. Systems don’t have to be programmed to tie break with random methods. That is not a fundamental aspect of an LLM, but a design decision.
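To illustrate point 3: even a "random" tie-break becomes reproducible if the implementation chooses to seed its RNG, which is exactly what makes it a design decision rather than something fundamental. A small sketch with an assumed pair of tied tokens:

```python
import random

tied = ["cat", "dog"]  # hypothetical tokens with identical probability

# Two independently seeded generators produce the same "random" choices.
rng1 = random.Random(42)
rng2 = random.Random(42)
picks1 = [rng1.choice(tied) for _ in range(5)]
picks2 = [rng2.choice(tied) for _ in range(5)]

print(picks1 == picks2)  # True — same seed, same sequence
```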

          This is not hard stuff to understand, if you understand computing.