![](https://media.kbin.life/87/69/8769ac0d5666a15e289b4a8d2e0b5abcf536cb939531e83ca807847e55cb17d1.jpg)
![](https://lemmy.world/pictrs/image/8286e071-7449-4413-a084-1eb5242e2cf4.png)
Hmm, the only issue I had was that it was using DoH (which I don’t have a local server for). Once I disabled that, it was fine.
I’m the administrator of kbin.life, a general-purpose/tech-oriented kbin instance.
Oh. Internal hosts I just set up on my own DNS… no need for that. The printer, I can’t say I’ve ever had a problem with.
Yeah, I don’t really have a use at home for mDNS. None that I can think of, anyway. Pretty sure I was doing all this before mDNS was a thing.
They (the service that provides both web protection and logging) install their own root certificate. It then creates certificates for sites on demand and routes web traffic through its own proxy, yes.
It’s why I don’t do anything personal at all on the work laptop. I know they have logs of everything everyone does.
What if I told you that businesses routinely do this to their own machines, mounting a deliberate MitM attack to log what their employees do?
In this case, it’d be a really targeted attack: break into their locally hosted server to steal the CA key, and also install a forced VPN/reroute in order to serve up MitM attacks or similar. And to what end? If you’re a billionaire, maybe, I’d suggest not doing this. Otherwise, I’d wonder why you (as in the average user) would be the target of someone who would need to spend a lot of time and money on the reconnaissance needed to break in and do anything bad.
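For the curious, you can spot this kind of interception yourself by looking at who issued the certificate your machine actually receives. A rough sketch (the corporate CA name in the comment is made up for illustration):

```python
import socket
import ssl

def flatten_name(name):
    """Collapse getpeercert()'s nested issuer/subject tuples into a dict."""
    return {key: value for rdn in name for (key, value) in rdn}

def issuer_of(host, port=443):
    """Fetch the certificate a host presents and return its issuer fields."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return flatten_name(tls.getpeercert()["issuer"])

# On an intercepted work machine the issuer is the corporate CA rather than
# a public one, e.g. something like {'organizationName': 'Acme Web Filter'}.
```

Because the corporate root certificate is installed in the machine’s trust store, the handshake succeeds silently; the giveaway is only in the issuer field.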
Sorry. I chose .local and I’m sticking to it.
I used to write Z80 asm without an assembler back when I was a LOT younger. The ZX Spectrum manual I had included the full instruction list with the byte values.
I think it was oddly easier than some higher level languages for some tasks.
But, making changes was an utter nightmare.
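Hand-assembly looked roughly like this: look each mnemonic up in the opcode table, then POKE the byte values into memory yourself. A tiny illustration (just a few table entries, but the encodings are the real Z80 ones):

```python
# A few Z80 opcodes from the instruction table (real encodings).
OPCODES = {
    "LD A,n":  0x3E,   # load immediate byte into A (operand byte follows)
    "ADD A,B": 0x80,   # A = A + B
    "RET":     0xC9,   # return
}

# "Assemble" LD A,5 / ADD A,B / RET by hand, exactly as on paper:
routine = bytes([OPCODES["LD A,n"], 0x05, OPCODES["ADD A,B"], OPCODES["RET"]])
print(routine.hex())  # 3e0580c9 - the bytes you'd POKE into memory
```

It also shows why changes were a nightmare: insert one instruction and every byte after it shifts, so any hand-computed jump target beyond that point is now wrong.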
I find anything with that rubbery coating gets crappy over time. I still have an old X52 Pro I’ve had for probably around 15 years now. In the end I completely took off the flaking rubber-style coating they put over it; it’s now shiny plastic and still going strong.
I also have a G502 that’s 6 years old. It has some worn areas where it’s actively held and on the buttons. I replaced the skates last year and have a spare set. Otherwise, still going strong.
Really not sure why I’d subscribe for something that lasts so long and isn’t THAT expensive to replace.
I’m going to blame the cloud for this. SaaS has got pretty much every software company hooked on the idea that they can have their cake and eat it with recurring revenue from cloud-hosting their services.
This seems to have overflowed into every other market, where they want a piece of that pie.
I’m hoping it’s a fad that goes away. You know how we can make it a fad that goes away? Don’t buy into this shit.
How does Apple handle this?
Really not sure whether they have any kernel-level antivirus products. The same question applies, I guess, to third-party hardware drivers. How are they installed? What privilege level do they run at?
Exactly. Either they’re going to give Windows Defender a monopoly on antivirus and endpoint protection (the EU will shut them down faster than a CrowdStrike blue screen), or they will need to grant that access to those providers.
If Microsoft think they will be able to curate every single device driver and other kernel module (like antivirus etc.) and catch the kind of bug that caused this error, they’re deluded.
I’ll wait and see what they actually propose before outright ruling it out. But, I can’t see how they do this in any realistic way.
This was normal back in the days of CS 1.5/1.6. People would play at 640×480 on a monitor that could handle 1280×960, because it could drive 640×480 at 150+ Hz.
I would have thought the plastic screwdriver was more about being able to adjust variable inductors/capacitors with minimal interference. With a metal screwdriver you have to adjust, move it away, then check the result, since the presence of the screwdriver affects the reading too.
Yeah, I have a problem too! No, wait. It’s because I don’t have an X/Twitter/whatever account.
Thanks. That explains a lot of what I didn’t think was right regarding the almost simultaneous failures.
I don’t write kernel code at all for a living. But I do understand the rationale behind it, and it seems to me this doesn’t fit that expectation. Now, a lot of this is hypothetical. But if I were writing this software, any processing of these files would happen in userspace. That would mean any rejection of bad/badly formatted data, or even an outright crash of the parser, would just be an app crash.
The general rule I’ve always heard is that you want to keep the minimum required work in the kernel. So I think parsing/rejection should have been happening in userspace (perhaps even in code written in a higher-level language with better memory protections etc.), with a parsed and validated set of data then passed to the kernel code for actioning.
I admit I’m observing from the outside, and it could be nothing like this. But on the face of it, it does seem to me like they were processing too much in the kernel.
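To illustrate the split I mean (this is a made-up, simplified file format, not CrowdStrike’s actual one): the userspace side fully parses and validates the definition file, and only a well-formed result ever gets handed across to the kernel component.

```python
import struct

MAGIC = b"DEFS"

def parse_definitions(blob):
    """Userspace parser: reject anything malformed before the kernel sees it.

    Hypothetical format: 4-byte magic, then length-prefixed records
    (2-byte big-endian length followed by that many payload bytes).
    """
    if blob[:4] != MAGIC:
        raise ValueError("bad magic")
    records, offset = [], 4
    while offset < len(blob):
        if offset + 2 > len(blob):
            raise ValueError("truncated length field")
        (length,) = struct.unpack_from(">H", blob, offset)
        offset += 2
        if offset + length > len(blob):
            raise ValueError("truncated record")   # an app error, not a BSOD
        records.append(blob[offset:offset + length])
        offset += length
    return records  # only this validated list would cross into the kernel

good = MAGIC + b"\x00\x03abc" + b"\x00\x01x"
print(parse_definitions(good))   # [b'abc', b'x']
```

Whether the faulty channel file hit a parser in the kernel or something else entirely, I don’t know; this just sketches the boundary I’d expect.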
That’s interesting. We use CrowdStrike, but I’m not in IT so don’t know about the configuration. Is a channel file somehow similar to AV definitions? That would make sense, and I guess means this was a bug in the CrowdStrike code parsing the file somehow?
I think it’s most likely a little of both. The fact that most systems failed at around the same time suggests this was the default automatic upgrade/deployment option.
So, for sure the default option should have had upgrades staggered within an organisation. But at the same time organisations should have been ensuring they aren’t upgrading everything at once.
As it is, the way the upgrade was deployed made the software a single point of failure that completely negated redundancies and in many cases hobbled disaster recovery plans.
My favourite thing has been watching Sky News (UK) operate without graphics, trailers, adverts or autocue. Back to basics.
It might not even be that. A lot of places have many servers (and even more virtual servers) running crowdstrike. Some places also seem to have it on endpoints too.
That’s a lot of machines to manually fix.
Setting up online accounts and allowing login via online accounts is fine. Forcing the use of an online account to use an operating system is not OK. They are actively blocking workarounds people use to set up their machine with a local account only.
Providing an easy (perhaps upon installation or first login) method to enable full disk encryption is a good thing. Automatically doing it without user intervention is not.
I would say that enabling it by default on a laptop, while offering a way to disable it before it happens, makes sense. I have BitLocker enabled on my laptop. But I cannot see any real reason to put it on my desktop. The cases where BitLocker on my desktop makes sense are too few to bother with the potential problems it brings.
The two things are also linked: I suspect they will tie your BitLocker recovery keys to the Microsoft account they force you to log in with during Windows setup. Should you lose access to that account through any means, you’re one misclick/hardware change away from bricking your system.
I also wonder, say for example your Microsoft account becomes banned/deleted through some obscure TOS violation and your PC doesn’t have any local accounts configured. Are you locked out of your PC?
I’m not anti-Microsoft. I’m against a lot of their recent actions, and cynical about the overall intentions behind them.