• 47 Posts
  • 377 Comments
Joined 2 years ago
Cake day: March 19th, 2024

  • Well, yes, but that is not exclusive to Pixels; in fact, most phones (other than the latest iPhones) are more vulnerable. Pixels, especially the latest devices, have the best hardware security features of any Android phone (unfortunately). You’re focused on Pixels only because of the recent leaks, which singled them out precisely because of how difficult they are to breach. Here’s the full matrix from last year (which hasn’t been leaked as recently):

    https://discuss.grapheneos.org/d/14344-cellebrite-premium-july-2024-documentation

    GrapheneOS, even now, is not vulnerable for several reasons, most of which tie into the hardware features of the Pixel. There’s a reason Graphene only works on Pixel.

    All I’m saying is that it’s entirely misleading to imply that only Pixels are vulnerable. Other phones are vulnerable too, including iPhones.

    I’m also not sure why you seem to be suggesting that I disagree that Google is happy to leave vulnerabilities wide open, when that is exactly what I said in my original comment. Their new release schedule lets them leave these vulnerabilities open for even longer, making Cellebrite’s job easier.

  • AmbiguousProps@lemmy.today to Technology@lemmy.ml · LLMs Will Always Hallucinate
    14 days ago

    The AI we’ve had for over 20 years is not an LLM. LLMs are a different beast. This is why I hate the “AI” generalization. Yes, there are useful AI tools. But that doesn’t mean that LLMs are automatically always useful. And right now, I’m less concerned about the obvious hallucinations that LLMs constantly produce, and more concerned about the hype cycle that is causing a bubble. This bubble will wipe out savings and retirements and leave people going hungry. That’s not to mention the people currently, right now, being glazed up by these LLMs and falling into a sort of psychosis.

    The execs inflating this bubble say a lot of the same things you’re saying (with a lot more insanity, of course). They generalize, lumping all of the different, genuinely useful tools (such as models used in cancer research) together with LLMs. This is what allows them to equate those well-studied, well-tested models with LLMs. Basically, because some models and tools have had real impact, that must mean LLMs are just as useful, and we should definitely be melting the planet to feed more copyrighted, stolen data into them at any cost.

    That usefulness is yet to be proven in any substantial way. Sure, I’ll grant that they can be situationally useful for things like writing new functions in existing code. They can be moderately useful for generating ideas for projects. But they are not useful for finding facts or the truth, and unfortunately, that is what the average person uses them for. They are also nowhere near able to replace software devs, engineers, accountants, etc., primarily because of how they are built: to hallucinate a result that looks statistically correct.

    LLMs also will not become AGI; they are simply not capable of it. I know you’re not claiming otherwise, but the execs who say things similar to your last paragraph are claiming exactly that. I want to point out who you’re helping by saying what you’re saying.

  • If missing data (such as locations) is the issue, then you can update the map yourself and help others migrate at the same time. Every little bit helps, even if you don’t plan on fully moving over. I’ve made over a thousand edits to my local area, and it’s now actually more accurate than Google Maps in a lot of the commercial areas. You don’t have to make a thousand edits, though; like I said, every little bit helps.

    Of course, editing only locally doesn’t help areas outside your own, but if enough people were willing to update the map, things could change.