• ThirdConsul@lemmy.ml · 20 hours ago

    How would a token prediction machine arrive at “undecidable”? I mean, would you just add a percentage threshold? Static or calculated? How would you calculate it?
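
    To make the question concrete, here’s the kind of static threshold I mean (a purely hypothetical sketch; the cutoff and names are made up, not anything from the article):

    ```python
    # Hypothetical sketch: a static confidence cutoff on the next-token distribution.
    THRESHOLD = 0.7  # static cutoff; but why 0.7, and how would you ever calculate it?

    def answer_or_abstain(token_probs: dict[str, float]) -> str:
        """Return the most likely token, or 'UNDECIDABLE' when confidence is low."""
        token, prob = max(token_probs.items(), key=lambda kv: kv[1])
        return token if prob >= THRESHOLD else "UNDECIDABLE"

    # A flat distribution abstains; a peaked one answers.
    print(answer_or_abstain({"yes": 0.40, "no": 0.35, "maybe": 0.25}))  # UNDECIDABLE
    print(answer_or_abstain({"yes": 0.92, "no": 0.05, "maybe": 0.03}))  # yes
    ```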

    (Why jfc? Because two people downvoted you? Dood, grow some.)

    • skisnow@lemmy.ca · 19 hours ago

      It’s easy to be dismissive because you’re talking from the frame of reference of current LLMs. The article is positing a universal truth about all possible technological advances in future LLMs.

      • ThirdConsul@lemmy.ml · 18 hours ago

        Then I’m confused about what your point is on the Halting Problem vis-à-vis hallucinations being an un-mitigable quality of LLMs. Did I misunderstand you as proposing “return undecidable (somehow magically, bypassing the Halting Problem)” as the solution?

          • skisnow@lemmy.ca · 11 hours ago

          First, there’s no “somehow magically” about it: the entire logic of the halting problem’s proof relies on being able to set up a contradiction. I’ll agree that returning “undecidable” doesn’t solve the problem as stated, because the problem as stated only allows two responses.
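
          To spell out “set up a contradiction”: the standard proof assumes a two-answer decider and builds a program that does the opposite of whatever the decider says about it. A rough sketch (the decider below is a stand-in; no real one can exist, which is the point):

          ```python
          # Sketch of the diagonal construction behind the halting problem's proof.
          def halts(prog) -> bool:
              """Hypothetical two-answer oracle: True if prog() halts, False otherwise."""
              raise NotImplementedError("no such oracle can exist")

          def diagonal():
              # Built to contradict whatever the oracle says about diagonal itself.
              if halts(diagonal):
                  while True:   # oracle says "halts", so loop forever
                      pass
              else:
                  return        # oracle says "loops forever", so halt at once

          # With only True/False allowed, both answers about diagonal() are wrong.
          # A decider allowed a third answer could say "undecidable" here and no
          # contradiction arises, but then it isn't solving the problem as stated.
          ```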

          My wider point is that the halting problem as stated is a purely academic one that’s unlikely to ever cause a problem in any real-world scenario. Indeed, the ability to say “I don’t know” to unsolvable questions is a hot topic of ongoing LLM research.