• Thorry84@feddit.nl · 2 days ago

An LLM cannot be anything other than a bullshit machine. It just guesses at what the next word is likely to be. And because it’s trained on source data that contains truths as well as untruths, what comes out is sometimes true by chance. But it doesn’t “know” what is true and what isn’t.
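
To make the “guessing the next word” point concrete, here’s a toy sketch (not a real LLM, just a made-up bigram table as a stand-in) showing how generation is sampling from learned word statistics, with no truth check anywhere in the loop:

```python
import random

# Hypothetical bigram "model": probabilities of the next word given the previous one.
# A real LLM conditions on the whole context with a neural network, but the core
# generation loop is the same: score candidate next tokens, then pick one.
bigram_probs = {
    "the": {"sky": 0.4, "moon": 0.3, "earth": 0.3},
    "sky": {"is": 0.7, "was": 0.3},
    "is":  {"blue": 0.5, "green": 0.3, "falling": 0.2},  # plausible and implausible alike
}

def next_token(prev):
    """Sample the next word purely from co-occurrence statistics."""
    candidates = bigram_probs.get(prev, {"<end>": 1.0})
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the"]
while tokens[-1] != "<end>" and len(tokens) < 6:
    tokens.append(next_token(tokens[-1]))

print(" ".join(tokens))  # e.g. "the sky is green" -- fluent output, never fact-checked
```

Whether the output happens to be true depends entirely on which statistics got sampled, which is the whole point.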

No matter what they try, this won’t change. It’s also one of the main reasons the LLM path will never lead to AGI, although parts of what make up an LLM could possibly be used inside something that does reach the AGI level.