• 23 Posts
  • 940 Comments
Joined 5 years ago
Cake day: May 31st, 2020



  • I mean, sure, I do understand what’s happening on a logical level. I’m just so baffled, because this whole internet thingamabob was architected by the military.
    It was intentionally built so that parts of it could fail without disrupting the rest. When a corporation fucks up, that was supposed to take down only that corporation’s servers, not also a good chunk of everything else.

    But unfortunately, this internet thingamabob is merely the closest approximation we have of the “perfect market” that economic theory calls for, so it still doesn’t actually self-regulate the way that theory would love to believe.
    In fact, it’s so much worse, because now monopolization happens across the whole planet, particularly since we don’t have a functioning “world government” that could enforce competition at that level via laws.

    So, the network leads to companies monopolizing on top of it, and those monopolies then push the respective companies to do as poor a job as they can get away with, because that reduces costs and increases profits. As a result, major parts of this military-grade internet now falter every few weeks.


  • Oh man, these global outages are really getting out of hand. A few days after the recent AWS and Azure outages, I suddenly noticed that I couldn’t reach certain webpages anymore. And I genuinely didn’t even bother trying to debug, because I just assumed it was another global outage.

    In the evening, I did look into it and noticed that my router was at fault (presumably DNS got bugged by a recent update). It was just wild to me that I genuinely deemed it more likely that several major webpages had gone offline together than that my home setup was fucky.






  • As the other person said, the bit about Arch is just the preamble.
    But you can use Nix Home-Manager on Arch (or other distros), if you’re so inclined, which will give you that reproducibility for the stuff in your home directory.

    In some ways, this is like backing up and restoring your dotfiles, but it allows you to template those dotfiles, and depending on the program, it offers simpler ways to populate those templates. For example, KDE applications don’t generally produce very legible config files, so configuring e.g. a panel via dotfiles is kind of a pain. To help with this, there’s Nix Plasma-Manager.
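
    As a rough sketch of what that looks like (the user settings and panel layout here are all made up, and it assumes the plasma-manager module is already in your Home-Manager imports):

```nix
{ config, pkgs, ... }:
{
  # Made-up user; required for a standalone Home-Manager setup.
  home.username = "jane";
  home.homeDirectory = "/home/jane";
  home.stateVersion = "24.05";

  # Instead of maintaining a raw ~/.gitconfig, you declare the options
  # and Home-Manager generates the dotfile for you.
  programs.git = {
    enable = true;
    userName = "Jane Doe";
    userEmail = "jane@example.com";
  };

  # With plasma-manager imported, even a KDE panel becomes declarative,
  # rather than an opaque blob somewhere under ~/.config.
  programs.plasma = {
    enable = true;
    panels = [ { location = "bottom"; height = 44; } ];
  };
}
```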



  • The thing I never understood about PowerShell is that it’s in places more verbose than C#, which is one of the most verbose programming languages in existence. It just feels like you might as well go for a full-fledged programming language at that point.

    The appeal of Bash et al. is that the scripting is almost the same as the interactive usage, which you already know. But because PowerShell is so verbose, I’m really not sure people actually use it interactively.

    I guess the code snippet in the article makes somewhat of a difference, in that PowerShell offers better features for interop between processes. But man, that still feels like it could’ve been a library instead…
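
    Just to illustrate the verbosity point (this is made up, not the article’s snippet): listing the five largest files is a terse text-pipeline one-liner in Bash, while PowerShell spells out every cmdlet and parameter, but in return pipes structured FileInfo objects instead of text:

```powershell
# Bash equivalent, roughly: find . -type f -printf '%s %p\n' | sort -rn | head -n 5
# PowerShell passes objects down the pipeline, so no text parsing is needed,
# but every step is a fully spelled-out cmdlet with named parameters.
Get-ChildItem -Recurse -File |
    Sort-Object -Property Length -Descending |
    Select-Object -First 5 -Property FullName, Length
```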






  • I agree in general that a crash is much better than failing silently, but to give you some of the nuance I’ve already mostly figured out:

    • In a script or CLI, you may never need to move beyond just crashing.
    • In a GUI application or mobile app, a crash may be good (so long as unsaved data can be recovered), but you likely need to collect additional information about what the program was doing when the crash happened.
    • In a backend service, a crash can be problematic when it isn’t actually necessary, since it can be abused for Denial-of-Service attacks. It’s still infinitely better than failing silently, but you have to invest in logging, monitoring, and alerting, so you don’t need a crash to make failures visible.
    • In a library, you generally don’t want to trigger a crash unless an irrecoverable error happens, because you don’t know where the library will be used (see the sketch below).
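
    A minimal Rust sketch of that last point (all names made up): the library returns the error and stays agnostic, while the application decides whether crashing is acceptable:

```rust
use std::fmt;

// Hypothetical library error: returned instead of panicking, because the
// library can't know whether it runs in a CLI (where crashing is fine)
// or in a long-running service (where it isn't).
#[derive(Debug)]
pub struct ParseError {
    pub line: usize,
    pub message: String,
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "parse error on line {}: {}", self.line, self.message)
    }
}

impl std::error::Error for ParseError {}

// Library code: surface the error and let the caller decide what to do.
pub fn parse_numbers(input: &str) -> Result<Vec<i64>, ParseError> {
    input
        .lines()
        .enumerate()
        .map(|(i, line)| {
            line.trim().parse().map_err(|e| ParseError {
                line: i + 1,
                message: format!("{e}"),
            })
        })
        .collect()
}

fn main() {
    // Application code: a CLI might just .unwrap() here and crash,
    // while a service would log the error and keep serving.
    match parse_numbers("1\n2\nthree") {
        Ok(nums) => println!("{nums:?}"),
        Err(e) => eprintln!("{e}"),
    }
}
```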

  • I’m currently implementing error handling for a library I’m building, and the process is basically to throw all of the information I can find in there. It makes the error-handling code quite verbose, but there’s no easy way for me to know whether the underlying errors already expose that information, so duplicating it is actually easier to deal with. 🫠
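
    A sketch of what that looks like (names made up): std::io::Error, for instance, doesn’t carry the file path, so the wrapper duplicates that context just in case:

```rust
use std::{fs, io, path::PathBuf};

// Wrap the underlying error and duplicate any context we can find,
// since we can't rely on the source error exposing it. Verbose, but
// it beats auditing every underlying error type.
#[derive(Debug)]
pub struct ConfigError {
    pub path: PathBuf,           // what we were touching
    pub operation: &'static str, // what we were doing at the time
    pub source: io::Error,       // the underlying error, kept regardless
}

pub fn read_config(path: &str) -> Result<String, ConfigError> {
    fs::read_to_string(path).map_err(|source| ConfigError {
        path: PathBuf::from(path),
        operation: "read_to_string",
        source,
    })
}
```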



  • > However there are things when the Ai is helpful, especially for writing tests in a restrictive language such as Rust.

    For generating the boilerplate surrounding the tests, sure.
    But the contents of the tests are your specification. They’re the one part of the code where you should be thinking about what needs to happen, and they should be readable.

    A colleague at work generated unit tests, and it’s the stupidest code I’ve seen in a long while: all imports repeated in each test case, plus tons of random assertions also repeated in each test case, like some shotgun approach to regression testing.
    It makes it impossible to know which parts of the asserted behaviour are actually intended and which parts just got caught in the crossfire.
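
    For contrast, a sketch of what I’d want instead (parse_header is a made-up function): shared imports at the module level and one intention per test, so a failure tells you exactly which behaviour broke:

```rust
// Hypothetical function under test, included so the sketch is self-contained.
pub struct Header {
    pub name: String,
    pub value: String,
}

pub fn parse_header(line: &str) -> Result<Header, String> {
    let (name, value) = line.split_once(':').ok_or("missing ':' separator")?;
    if name.trim().is_empty() {
        return Err("empty header name".into());
    }
    Ok(Header {
        name: name.trim().to_string(),
        value: value.trim().to_string(),
    })
}

#[cfg(test)]
mod tests {
    use super::*;

    // One intention per test, one or two assertions, imports shared at
    // the top of the module -- the opposite of the shotgun approach.
    #[test]
    fn parses_name_and_value() {
        let h = parse_header("Content-Length: 42").unwrap();
        assert_eq!(h.name, "Content-Length");
        assert_eq!(h.value, "42");
    }

    #[test]
    fn rejects_line_without_separator() {
        assert!(parse_header("no separator here").is_err());
    }
}
```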