• skisnow@lemmy.ca · +1 · 3 hours ago

    The thing that always bothered me about the Halting Problem is that the proof of it is so thoroughly convoluted, and so easy to “fix” (simply add the ability to return “undecidable”), that it seems wanky to try applying it as part of a proof for any kind of real-world problem.
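
    For reference, the proof in question is the standard diagonalization argument. Here is a minimal Python sketch of it, assuming a hypothetical halts(program, argument) oracle that claims to decide halting:

        def halts(program, argument):
            # Hypothetical oracle: assumed to return True iff
            # program(argument) eventually halts.
            raise NotImplementedError

        def paradox(program):
            # Do the opposite of whatever the oracle predicts about us.
            if halts(program, program):
                while True:   # oracle says we halt, so loop forever
                    pass
            else:
                return        # oracle says we loop, so halt immediately

        # Calling paradox(paradox) contradicts either answer halts() could
        # give, so no total, correct halts() can exist.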

    • sunbeam60@lemmy.one · +1 · 14 hours ago

      Yes. Like people, if you want the nuggets of gold, you need to go dig them out of the turds.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +6/-1 · 14 hours ago

    It’s worth noting that humans aren’t immune to the problem either. The real solution will be a system that can do reasoning and has a heuristic for figuring out whether something is likely a hallucination or not. The reason we’re able to do that is that we interact with the outside world, and we get feedback when our internal model diverges from it, which lets us bring the two back in sync.
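
    As a toy illustration of that feedback loop (the predict/observe/update names here are made up, not any real API):

        def grounding_loop(model, environment, tolerance=0.1, steps=100):
            # Compare what the internal model predicts with what the outside
            # world reports, and correct the model whenever the two diverge.
            for _ in range(steps):
                predicted = model.predict()        # what we expect to see
                observed = environment.observe()   # external feedback
                if abs(predicted - observed) > tolerance:
                    # divergence: treat the prediction as a likely hallucination
                    model.update(observed)         # bring the model back in sync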

  • Ilixtze@lemmy.ml · +15/-6 · 17 hours ago

    This is a feature, not a bug. Right-wing oligarchs, a lot of them in tech, have been creaming their pants for decades over the fantasy of shaping general consensus and privatizing culture. LLM hallucination is just a wrench they’re throwing into the machinery of human subjectivity.

    • Scrubbles@poptalk.scrubbles.tech · +16/-3 · 16 hours ago (edited)

      Uh, no. If you want to be mad at something like that, look into how they’re training models without a care for bias (or adding in their own biases).

      Hallucination is a completely different thing, one that is mathematically proven to happen regardless of who or what made the model. Even if the model only knows about fluffy puppies and kitties, it will still always hallucinate to some extent; in that case it will just be hallucinating fluffy puppies and kitties. In the end, it’s just randomly sampled data.
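
      As a toy illustration (assuming a made-up two-token distribution; real models sample over tens of thousands of tokens the same way):

          import random

          # A "model" trained only on fluffy puppies and kitties. The sampler
          # has no notion of truth, only of probability, so whichever token
          # comes out is chosen by the dice roll, not by whether it is actually
          # correct in context - which is all hallucination is at this level.
          next_token_probs = {"puppies": 0.6, "kitties": 0.4}

          def sample_next_token():
              r = random.random()
              cumulative = 0.0
              for token, p in next_token_probs.items():
                  cumulative += p
                  if r <= cumulative:
                      return token
              return token  # guard against floating-point rounding

          print("fluffy " + sample_next_token())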

      That isn’t some conspiracy. Now, if you expected a model that’s all fluffy kitties and puppies and you’re mad because it starts spewing out hate speech - that’s not hallucination. That’s the training data.

      If you’re going to rage about something like that, you might as well rage about the correct thing.

      I’m getting real tired of the “AI is the boogeyman” stuff. AI isn’t bad. We’ve had AI and models for over 20 years now, and they can be really helpful. The bias that is baked into them, and how they’re implemented and trained, has always been and will continue to be the problem.

      • AmbiguousProps@lemmy.today · +3 · 13 hours ago (edited)

        The AI we’ve had for over 20 years is not an LLM. LLMs are a different beast. This is why I hate the “AI” generalization. Yes, there are useful AI tools, but that doesn’t mean LLMs are automatically always useful. And right now I’m less concerned about the obvious hallucinations that LLMs constantly produce, and more concerned about the hype cycle that is causing a bubble. This bubble will wipe out savings and retirements and leave people starving. That’s not to mention the people currently, right now, being glazed up by these LLMs and falling into a sort of psychosis.

        The execs causing this bubble say a lot of things similar to what you’re saying (with a lot more insanity, of course). They generalize and lump all of the different, actually very useful tools (such as the models used in cancer research) in with LLMs. That is what lets them equate the very useful, well-studied and well-tested models with LLMs. Basically, because some models and tools have had real impact, that must mean LLMs are just as useful, and we should definitely be melting the planet to feed more copyrighted, stolen data into them at any cost.

        That usefulness is yet to be proven in any substantial way. Sure, I’ll grant that they can be situationally useful for things like writing new functions in existing code, and moderately useful for brainstorming project ideas. But they are not useful for finding facts or the truth, and unfortunately that is what the average person uses them for. They are also nowhere near able to replace software devs, engineers, accountants, etc., primarily because they are built to hallucinate a result that merely looks statistically correct.

        LLMs also will not become AGI; they are not capable of that in any sort of capacity. I know you’re not claiming otherwise, but the execs who say things similar to your last paragraph are claiming exactly that. I just want to point out who you’re helping by saying what you’re saying.