  • I’m just gonna chime in to point out that those are both real-time, combat-focused games, requiring reflexes and quick thinking, which is a notable departure from the roguelike formula. Slay the Spire, on the other hand, is all about careful planning, making decisions one step at a time, and taking calculated risks. There’s no turn timer, no time-based combos or bonuses, no time-gated doors that give you extra items if you go fast enough.

    Oh, and also meta-progression. Hades and Dead Cells are both built around a central system of grinding out unlocks and upgrades. In Slay the Spire, the only meta-progression I know of is having to beat the game with each character to unlock the next, and having to complete a few runs with each character to unlock all their cards… And then the real progression, where you keep beating the game at increasing difficulty levels to unlock the next one.

    Ultimately, this might not matter for you, and even if it does, a slow strategic deckbuilder might still not be for you, and that’s completely fine.



  • “some of my games didn’t launch, complaining about missing stuff.”

    I don’t know Slackware, but on Arch there’s the standard Steam runtime version, and then there’s the unofficial steam-native-runtime, which uses system packages instead of Steam’s own bundled runtime. And if we’re talking native Linux games, which is where the problem usually is, they tend not to work with Steam’s runtime, presumably because they weren’t properly built to target it, and need to be launched with the native runtime instead (or you switch to running the Windows version with Proton…). Something like the launch option below.
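
    On Arch, this is roughly what the steam-native script does under the hood, and as far as I know you can get the same effect per game via Steam’s launch options - treat it as an assumption to verify on Slackware:

    ```
    # Per-game launch option (assumption: the same mechanism works outside Arch):
    # skip Steam's bundled runtime and use system libraries instead.
    STEAM_RUNTIME=0 %command%
    ```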




  • I use KDE, but for my file manager I stick to Thunar, which comes from Xfce and is GTK-based. It does cause me some issues, since Thunar uses gvfs for stuff like mounting USB drives, whereas Plasma loads kio, seemingly with no way to disable it, and they fight for control over devices.

    I remember one thing in particular that pissed me off about Dolphin: it displays folders with 4 tilted miniature icons of the files inside, with no way to turn that off, or even just make them not be randomly tilted. Such a minor thing, but when I was choosing, it came down to clean icons versus a scrambled mess, and I went with clean icons.

    Ultimately, I wish gvfs/kio weren’t an issue, but I love having the freedom to choose.


  • One counterpoint - even with a weak speed-to-capacity ratio, a lot of cheap storage can be very useful for incremental backups: you keep a small index to check what needs to be backed up, you only write new/modified data, and when restoring you only read the indexes plus the data you’re actually restoring. That saves time writing and lets you keep access to historical versions.

    There are two caveats here, of course, assuming the media isn’t rewritable. One, you need to be able to quickly seek to the latest index, which can’t reliably be at the start; and two, you need a format that works without rewriting any data, possibly with a footer (like zip, which keeps its central directory at the end), which introduces extra complexity. Though I foresee a potential trick where each index reserves an unallocated block into which the address of the next index can be written later - something like the sketch below.
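
    A minimal sketch of that trick in Rust, illustrated on an ordinary file (hypothetical format, not any real backup tool: every index block ends with a reserved 8-byte slot holding the offset of the next index, 0 meaning “none yet”):

    ```rust
    use std::fs::OpenOptions;
    use std::io::{Result, Seek, SeekFrom, Write};

    /// Append a new index block and back-patch the previous index's
    /// reserved slot, so a reader can hop from index to index.
    fn append_index(path: &str, prev_slot: Option<u64>, index: &[u8]) -> Result<u64> {
        // Archive file is assumed to already exist.
        let mut f = OpenOptions::new().read(true).write(true).open(path)?;

        // The new index goes at the current end of the archive.
        let new_offset = f.seek(SeekFrom::End(0))?;
        f.write_all(index)?;
        f.write_all(&0u64.to_le_bytes())?; // reserved slot for the *next* index

        // Fill in the slot the previous index left unallocated.
        if let Some(slot) = prev_slot {
            f.seek(SeekFrom::Start(slot))?;
            f.write_all(&new_offset.to_le_bytes())?;
        }
        f.sync_all()?;
        Ok(new_offset)
    }
    ```

    A reader then starts at the first index and follows the chain of offsets until it hits a zero slot - that’s the latest one.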




  • “Shouldn’t it be more efficient to download only the changes and patch the existing files?”

    As people mentioned, that becomes problematic with a distro like Arch. With busier packages, and if you update less frequently, you could easily be jumping 5-6 versions in a single update. That means you need to apply the diffs in order, and all of those diffs need to actually be available.

    This poses two issues. The first is that software usually isn’t built for this kind of binary stability - anything compiled or autogenerated can change a lot from a small source change, and even just compressing data files will scramble the bytes. Because of that, a diff/delta might not save much space, and downloading several of them in a row could end up bigger than just downloading the files directly.

    The second issue is mirrors - they have to store and serve a lot of data, and they’re not controlled by the distribution. Presumably to save space, they quickly remove older package versions - and by older, I mean potentially less than a week old. For diffs/deltas to work, mirrors would need to keep not only the full package files they already store (for fresh installs), but also deltas going N days back - and those would only help people who update more often than every N days. The sketch below shows why the chain is so fragile.
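
    To make the ordering constraint concrete, a sketch with a made-up delta format (real formats like bsdiff also insert and delete bytes; this one only overwrites):

    ```rust
    /// Made-up delta format: a list of (offset, replacement bytes) patches.
    struct Delta {
        patches: Vec<(usize, Vec<u8>)>,
    }

    /// Going from version N to N+k means applying every intermediate delta,
    /// in order. If the mirror has dropped even one of them, the whole
    /// chain is useless and you're back to a full download.
    fn apply_chain(mut data: Vec<u8>, deltas: &[Delta]) -> Vec<u8> {
        for delta in deltas {
            for (offset, bytes) in &delta.patches {
                data[*offset..*offset + bytes.len()].copy_from_slice(bytes);
            }
        }
        data
    }
    ```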




  • I think you’re wrong about one thing - it’s not about compute cost, but about the complexity of accounting for latency. You could check whether the player can see the enemy they claim to have shot, but what you really need to check is whether they feasibly could have seen the enemy on their machine at the moment they sent the packet - and they were also working from outdated information about where the enemy was.

    The issue gets harder the more complex the game logic is. Throw physics simulation into the mix and the server and clients can quickly diverge from tiny differences.

    Ultimately, compensating for lag is convoluted, can still cause visible desync for clients (see people complaining about seeing their shots connect in CS2 without doing damage), and opens up potential issues with fake lag.

    More casual games will often simply trust the client, since it’s better for somebody to, say, fly around on an object that’s not there for other players, than for a laggy player to be spazzing out and rubberbanding on their screen, unable to control their character. For the stricter approach, the rewind idea looks roughly like the sketch below.
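
    A rough sketch of the classic rewind approach (what Valve calls lag compensation) - made-up types, just the core idea:

    ```rust
    use std::collections::VecDeque;

    /// Server-side history of where everyone was, one snapshot per tick.
    struct Snapshot {
        tick: u64,
        positions: Vec<(f32, f32, f32)>, // indexed by player id
    }

    struct History {
        snapshots: VecDeque<Snapshot>, // oldest first, bounded length
    }

    impl History {
        /// Rewind to the state the shooter actually saw on their screen:
        /// current tick minus their latency (interpolation delay omitted).
        fn rewind(&self, server_tick: u64, latency_ticks: u64) -> Option<&Snapshot> {
            let target = server_tick.saturating_sub(latency_ticks);
            self.snapshots.iter().rev().find(|s| s.tick <= target)
        }
    }

    // Hit validation then runs against the rewound positions rather than
    // the current ones - which is also why a victim can feel they were
    // shot "behind cover": the server judged the shot in the shooter's past.
    ```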




  • Both Java and Go, with their garbage collectors, seem excessively complex at runtime for fundamental system utilities. Rust, on the other hand, keeps the complexity in the compiler and the source, keeping the runtime behaviour simple. And of course it does that while trying to make memory easier to manage and mistakes harder to make, without forcing extra runtime logic on you. A tiny example below.
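
    What “complexity in the compiler” means in practice - the deallocation point is decided at compile time, and misuse is rejected before the program ever runs:

    ```rust
    fn main() {
        let data = vec![1, 2, 3]; // heap allocation, owned by `data`
        let total: i32 = data.iter().sum();
        println!("{total}");
        // No GC: the compiler inserts the deallocation right here, where
        // `data` goes out of scope. And use-after-free doesn't compile:
        // let r = &data[0];
        // drop(data);
        // println!("{r}"); // error[E0505]: cannot move out of `data` while borrowed
    }
    ```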


  • I think most of the work is in the fact that there often isn’t an “equivalent call”, and it can take quite a lot of code to bridge the gap. A good example is the whole esync/fsync/ntsync saga: synchronization primitives work differently on Linux and on Windows, and translating them was both a big performance hit and difficult to do accurately. If I understood correctly, esync and fsync were Wine patchsets that emulated the Windows primitives on top of Linux ones (eventfds and futexes respectively), while ntsync is an actual kernel driver that replicates the Windows semantics directly. The sketch below shows the flavor of the eventfd trick.
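
    Very roughly, the esync-style trick as I understand it (not Wine’s actual code, just the idea, using the libc crate): a Windows semaphore becomes an eventfd, and waiting on many handles at once then becomes poll() across many file descriptors, which is where the overhead creeps in:

    ```rust
    use libc::{eventfd, read, write, EFD_SEMAPHORE};

    /// A Windows-style semaphore emulated with a Linux eventfd.
    struct Semaphore(i32); // raw file descriptor

    impl Semaphore {
        fn new(initial: u32) -> Semaphore {
            // EFD_SEMAPHORE: each read() consumes exactly one count.
            Semaphore(unsafe { eventfd(initial, EFD_SEMAPHORE) })
        }
        /// Like ReleaseSemaphore with a count of 1: add one count.
        fn release(&self) {
            let one: u64 = 1;
            unsafe { write(self.0, &one as *const u64 as *const _, 8) };
        }
        /// Like WaitForSingleObject: block until a count is available, take it.
        fn wait(&self) {
            let mut buf: u64 = 0;
            unsafe { read(self.0, &mut buf as *mut u64 as *mut _, 8) };
        }
    }
    ```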