I’m the administrator of kbin.life, a general-purpose, tech-oriented kbin instance.

  • 0 Posts
  • 147 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Setting up online accounts and allowing login via online accounts is fine. Forcing the use of an online account to use an operating system is not OK. They are actively blocking the workarounds people use to set up their machines with a local account only.

    Providing an easy (perhaps upon installation or first login) method to enable full disk encryption is a good thing. Automatically doing it without user intervention is not.

    I would say that enabling it by default on a laptop, with a way to disable it before it happens, makes sense. I have BitLocker enabled on my laptop. But I cannot see any real reason to put it on my desktop. The cases where BitLocker on my desktop makes sense are too few to be worth the potential problems it brings.

    The two things are also linked: I suspect they will tie your BitLocker unlock keys to the Microsoft account they force you to log in with during computer/Windows setup. Should you lose access to that account by any means, you’re one misclick or hardware change away from bricking your system.

    I also wonder: say, for example, your Microsoft account gets banned or deleted through some obscure TOS violation and your PC doesn’t have any local accounts configured. Are you locked out of your PC?

    I’m not anti-Microsoft. I’m against a lot of their recent actions, though, and cynical about the intentions behind them.

  • What if I told you that businesses routinely do this to their own machines in order to mount a deliberate MitM attack and log what their employees do?

    In this case, it’d take a really targeted attack to break into their locally hosted server, steal the CA key, and also install a forced VPN/reroute in order to serve up MitM attacks or similar. And to what end? If you’re a billionaire, maybe don’t do this. Otherwise, I’d wonder why you (as in the average user) would be the target of someone willing to spend the time and money on the reconnaissance needed to break in and do anything bad.

  • I find anything with that coated plastic gets crappy over time. I still have an old X52 Pro I’ve had for probably around 15 years now. In the end I completely took off the flaking rubber-style coating they put over it; it’s now shiny plastic and still going strong.

    I also have a G502 that’s 6 years old. It has some worn areas where it’s actively held and on the buttons. I replaced the skates last year and have a spare set. Otherwise, still going strong.

    I’m really not sure why I’d pay a subscription for something that lasts this long and isn’t THAT expensive to replace.

  • Thanks. That explains a lot of what I didn’t think was right regarding the almost simultaneous failures.

    I don’t write kernel code for a living. But I do understand the rationale behind it, and it seems to me this doesn’t fit that expectation. Now, it’s a lot of hypotheticals. But if I were writing this software, any processing of these files would happen in userspace. That way, rejecting bad or badly formatted data, or even an outright crash of the parser, would result in nothing worse than an app crash.

    The general rule I’ve always heard is that you keep the work done in kernel code to a minimum. So I think parsing and rejection should have happened in userspace (perhaps even in code written in a higher-level language with better memory protections), with a parsed and validated set of data then passed to the kernel code for actioning.

    I admit I’m observing from the outside, and it could be nothing like this. But on the face of it, it does seem like they were processing too much in the kernel code.
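    To illustrate the split I have in mind (all names and file formats below are invented for illustration; this is a sketch of the general pattern, not the vendor’s actual design), the parsing and validation could live entirely in an ordinary process:

```python
import struct

# Hypothetical example: validate a binary "definition file" entirely in
# userspace before handing it to a kernel component. The format here is
# invented; the point is that a malformed file can only crash an app.

MAGIC = 0xC0DE
HEADER = struct.Struct("<HHI")  # magic, version, payload length

def parse_definitions(blob: bytes) -> bytes:
    """Return the validated payload, or raise ValueError on malformed input."""
    if len(blob) < HEADER.size:
        raise ValueError("truncated header")
    magic, version, length = HEADER.unpack_from(blob)
    if magic != MAGIC:
        raise ValueError("bad magic")
    if version != 1:
        raise ValueError("unsupported version")
    payload = blob[HEADER.size:]
    if len(payload) != length:
        raise ValueError("length mismatch")
    return payload

def load_update(blob: bytes) -> bool:
    """Only a fully validated payload would ever reach the kernel driver."""
    try:
        payload = parse_definitions(blob)
    except ValueError:
        return False  # reject the update; the machine keeps running
    # send_to_kernel(payload)  # e.g. via an ioctl on the driver's device node
    return True
```

    With this split, a bad update is rejected (or at worst crashes the userspace parser) instead of taking down the kernel.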

  • I think it’s most likely a little of both. The fact that most systems failed at around the same time suggests this was the default automatic upgrade/deployment option.

    So, for sure, the default option should have staggered upgrades within an organisation. But at the same time, organisations should have been ensuring they weren’t upgrading everything at once.

    As it is, the way the upgrade was deployed made the software a single point of failure that completely negated redundancies and in many cases hobbled disaster recovery plans.
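    As a sketch of what staggering could look like (purely hypothetical; the ring fractions and names are invented, not any vendor’s actual deployment mechanism), each machine can be hashed to a stable position in the fleet and updates released ring by ring:

```python
import hashlib

# Hypothetical staged ("ringed") rollout sketch: a deterministic hash maps
# each machine to a stable position in [0, 1), and each stage widens the
# eligible fraction of the fleet. A bad update caught at stage 0 only
# affects ~1% of machines instead of all of them at once.

RINGS = [0.01, 0.10, 0.50, 1.00]  # fraction of fleet eligible per stage

def ring_position(machine_id: str) -> float:
    """Map a machine ID to a stable position in [0, 1) via hashing."""
    digest = hashlib.sha256(machine_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def eligible(machine_id: str, stage: int) -> bool:
    """A machine receives the update once its position falls inside
    the fraction of the fleet covered by the current stage."""
    return ring_position(machine_id) < RINGS[stage]
```

    Because the hash is stable, a machine that was eligible at one stage stays eligible at every later stage, so the rollout only ever widens.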