• 0 Posts
  • 906 Comments
Joined 3 years ago
Cake day: June 16th, 2023

  • It’s a bit of hyperbole at the moment, where the concrete laws are basically “the OS asks the user for their age on the honor system and relays that to websites”. Linux distros can add that without much real controversy.

    Problem is, some are seeking laws that require the OS to actually verify age, which in practice means locking things behind something like a Google account and having an online account vendor process your real identity and actually validate your age. Under such a regime, the Linux desktop as it exists today becomes infeasible. Also, Microsoft could say they are no longer legally allowed to offer local accounts and force Microsoft accounts…


  • LLMs can be useful in this context, but Anthropic blew Mythos way, way out of proportion. It absolutely was overhyped.

    Their own demonstration had to work against a downlevel Firefox, so it would still have vulnerabilities that were already fixed before they even started.

    It seems their narrative is that other tools, some LLM-based and some not, may be as good as or better than Mythos at finding issues, but there were a couple of issues where Mythos was able to actually create a demonstrator, which the other models did not do. That’s relatively less interesting, since for a human, going from finding to demonstrator is generally not a huge part of the tedium; the tedium is usually in the finding.

    They pitched it as “it is dangerous, it will escape confinement”, and so on. But instead they had to explicitly start with a downlevel Firefox with known vulnerabilities unpatched, and they further had to disable all the security mitigations that in practice had already made the two “vulnerabilities” impossible to exploit.

    It’s a matter of degree and exaggeration.


  • Note that in this case, very specifically, they had to yank Firefox’s JavaScript engine out of Firefox, “but without the browser’s process sandbox and other defense-in-depth mitigations.” They had to remove the very mechanisms designed to quash vulnerabilities.

    And they had to test explicitly against a Firefox 147 vintage, because Firefox 148 had already fixed the two issues that Mythos exploited to get an impressive number. Before Mythos even ran, the key problems had been found and patched…


  • The document from Anthropic purporting to be security research largely leaves things vague (marketing-material vague) and declines to use any recognized standard that would even hint at how seriously to take the findings. They describe a pretty normal security reality (“thousands of vulnerabilities”, but anyone who lives in the CVE world knows that was the case before, so there’s nothing to really distinguish this from the status quo).

    Then, in their nuanced case study, they had to rip a specific piece out of Firefox to torture, removing all the security protections that would have already secured these ‘problems’. Then it underperformed an existing fuzzer, and nearly all of its successes were based on previously known vulnerabilities that had already been fixed; they were running the unpatched version to prove its ability.

    Ultimately, the one concrete thing they did was prove that, fed two already-known vulnerabilities, Mythos was able to figure out how to explicitly exploit them better than other models could. It was worse at finding vulnerabilities, but it could make a demonstrator. Which a human could have done, and that’s not the tedious part of security research; the finding is the tedious part. Again, in the real world these exploits never would have worked, because they had to disable a bunch of protections that had already neutered these “issues” before they were ever known.


  • Speaking generally…

    One is that it was pitched as a superhuman AI that could think in ways humans couldn’t possibly imagine, escaping any security measure we might think to bind it with. That was the calibrated expectation.

    Instead it’s fine at security “findings” that a human could have noticed if they had actually looked. For a lot of AI this is the key value proposition: looking when a human can’t be bothered to, and seeing less than a human would, but where the human never would have looked at all. For example, a human can more reliably distinguish a needle from a straw of hay, but the relentless attention of an AI system is a more practical approach for finding needles in haystacks. It will miss some needles and find some hay, so a dedicated human effort would have been better, but the AI is better than nothing, especially with a human to discard the accidental hay.

    Another thing is that the nuance of the “vulnerabilities” may be very underwhelming. Anyone who has been in the security world knows that the vast majority of reported “vulnerabilities” are nothing-burgers in practice. Curl had a “security” issue where a malicious command line could make it lock up instead of timing out if the peer stops responding. I’ve seen script engines that were explicitly designed to allow command execution get CVEs because a 4 GB malicious script could invoke commands without including the exec directive, and that engine is only ever used by people with unfettered shell access anyway. Another “critical” vulnerability required an authorized user to remove, and even rewrite, the code that listens to the network, so as to let in the unsanitized data that’s normally caught by the very bits they disabled. Curl had another where you could make it output vulnerable C code; the attacker would then “just” have to find a way to compile the output of the command, and they’d have a vulnerable C executable… How in the world would they be able to get curl to generate and compile C code but not otherwise be able to write whatever C code they want? Well, no one can imagine it, but hey, why not a CVE…



  • The difference in your scenario is that it is enforcing a regulation, rather than being bound by it.

    Yes, enforcing a regulation, particularly one with different requirements by geography, is a nightmare. You have to translate the law to code and make it conditional on some mechanism for determining jurisdiction.
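    To illustrate the burden, a minimal sketch of what geography-conditional enforcement in code ends up looking like (all jurisdiction names and thresholds here are invented for illustration, not real law):

```python
# Hypothetical sketch: a geography-dependent age rule translated into code.
# Jurisdictions and thresholds are invented, not drawn from any real statute.
AGE_RULES = {
    "region_a": {"min_age": 18, "verified_id_required": True},
    "region_b": {"min_age": 16, "verified_id_required": False},
}
DEFAULT_RULE = {"min_age": 0, "verified_id_required": False}

def access_allowed(jurisdiction: str, age: int, has_verified_id: bool) -> bool:
    """Every new or amended local law means another entry (or code change) here."""
    rule = AGE_RULES.get(jurisdiction, DEFAULT_RULE)
    if rule["verified_id_required"] and not has_verified_id:
        return False
    return age >= rule["min_age"]
```

    The nightmare is that the table has to track every jurisdiction’s law forever, and you still need some reliable way to decide which jurisdiction a given user is actually in.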

    However, a regulation like “you will not require online connectivity for single-player games; for multiplayer, you will ensure that third parties are able to keep hosting to keep the experience whole once you stop” is not a nightmare of nitpicky local regulations to navigate. The law doesn’t need to map to code; it just governs human behavior and decisions.

    For example, there are various ‘password’ laws, and it’s no huge deal to comply, since you only have to honor the strictest common denominator, and you don’t need software to implement the regulatory rules themselves.
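    The strictest-common-denominator approach can be sketched like this (the policy fields are hypothetical, just to show the shape of the idea): rather than branching per jurisdiction at runtime, you merge every applicable rule set once and ship a single policy.

```python
# Hypothetical sketch: collapse several password regulations into one policy
# by taking the strictest value of each field. Field names are invented.
def strictest_policy(policies: list[dict]) -> dict:
    return {
        "min_length": max(p["min_length"] for p in policies),      # longest minimum wins
        "require_symbol": any(p["require_symbol"] for p in policies),  # any mandate wins
        "max_age_days": min(p["max_age_days"] for p in policies),  # shortest rotation wins
    }

merged = strictest_policy([
    {"min_length": 8, "require_symbol": False, "max_age_days": 365},
    {"min_length": 12, "require_symbol": True, "max_age_days": 180},
])
# "merged" satisfies both rule sets simultaneously, so no per-jurisdiction code is needed
```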


  • Don’t have a Framework, but I think it’s due to the whole ‘modern standby’ approach, where the firmware doesn’t implement ‘standby’ anymore and just lets the OS put everything into as low a power state as possible, component by component.

    It doesn’t work well for Windows either, which is why a Windows laptop I have will ‘standby’ for maybe 15 minutes before shutting itself down to ‘hibernate’. I figure they decided that NVMe storage makes resume from hibernate ‘good enough’, and modern standby is such a power hog that they can’t pull it off.

    The problem on Linux is that distributions view Secure Boot as a promise they cannot keep if they resume from disk, so they block hibernate when Secure Boot is enabled, making it hard to bank on as a reliable recourse.




  • I imagine you see the undue burden as a mandate to keep running the game servers yourself when you have no income to do so.

    Once upon a time, the norm for exclusively online games was to provide a hostable server so that any third party could host, because the game companies didn’t want to bother with hosting themselves; at most they owned or outsourced a registry of running servers, and volunteers ran the instances.

    Then big publishers figured out that controlling the servers and keeping the implementation in-house was a good way to control the lifespan of games, and a number of games kept it closed.

    So the remedy is to return to allowing third-party hosting, potentially including hooks for a third-party registry of running game servers if we are talking about more ephemeral online instances like you’d have in shooters. One might allow keeping the serving in-house and only require third-party serving support once there is a plan to retire the in-house servers.






  • It’s about putting what you want, and only what you want, on the home screen, including for example launching straight into search.

    My phone’s stock launcher search dialog, which once was just for typing an app name, became a ‘multi-search’ doing internet search and AI search; app search became sluggish and was demoted to the third set of results. So I went for a launcher that keeps the app search field as a quick, name-based search of applications.

    It also does things like let me opt into fitting more icons on the screen at a time, since the default launcher shows a ludicrously small number of icons at once.

    Also, the scrolling lets me slide along the alphabet to jump straight to apps starting with ‘m’, for example, without typing, though I never use that.

    It also offers a different ‘folder’ design, where a tap launches a default app from the group and a quick slide opens it up to select a less-used alternate app quickly.

    Also, two finger swipe from top takes me straight to typing app name to launch.

    Someone else I knew swapped launchers just to have a different wallpaper behavior that their stock launcher wouldn’t do.

    Currently using Octopi.



  • I think that would apply to people tricked into reading/watching AI slop videos, but I think his definition is the more likely one respondents have in mind.

    You try a Google search, you get an ‘AI overview’. In a bizarre scenario, DuckDuckGo made a big deal of asking its users and showing that they overwhelmingly wanted to skip AI results by default, and DuckDuckGo still defaults to an AI summary unless you take measures to opt out.

    An analogy is difficult, but I suppose imagine a subway dropping someone off where there are no stairs up, only a tunnel where a Tesla takes you to the next stop. You “use” a car, but were given no option to do otherwise, because you were stuck underground and forced to take the car to carry on.

    In either case, his definition certainly is a likely one for a Gen Z respondent to have in mind when they answer “yes, I use AI”. On the flip side, some probably felt as you do and responded that they did not use AI, because they did not do so voluntarily.