• 23 Posts
  • 1.06K Comments
Joined 6 years ago
Cake day: May 31st, 2020




  • It’s just really oversimplifying memory usage. OS designers had that same thought decades ago already, so they introduced disk caching. If data gets loaded from disk, it won’t be erased from memory as soon as it isn’t needed anymore. It’s only erased if something else requests memory and this happens to be the piece of “free” memory that the kernel considers the most expendable.

    For example, this is what the situation on my system looks like:

    free -h
                   total        used        free      shared  buff/cache   available
    Mem:            25Gi       9,8Gi       6,0Gi       586Mi       9,3Gi        15Gi
    

    Out of my 32 GiB physical RAM, 25 GiB happens to be usable by my applications, of which:

    • 9.8 GiB is actually reserved (used),
    • 9.3 GiB is currently in use for disk caching and buffers (buff/cache), and
    • only 6.0 GiB is actually unused (free).

    If you run cat /proc/meminfo, you can get an even more fine-grained listing.
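    For instance, to pull out just the fields that free’s summary is derived from (the field selection here is illustrative; meminfo has many more entries):

    ```shell
    # Grep the /proc/meminfo fields behind free's total/free/available/buff-cache
    # columns. Works on Linux; field names are as reported by the kernel.
    grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached):' /proc/meminfo
    ```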

    I’m sure I could get the number for actually unused memory even lower if I had started more applications since booting my laptop. Or as the Wikipedia article I linked above puts it:

    Usually, all physical memory not directly allocated to applications is used by the operating system for the page[/disk] cache.

    So, if you launch a memory-heavy application, it will generally cause memory used for disk caching to be cleared, which will slow the rest of your system down somewhat.

    Having said all that, I am on KDE myself. I do not believe it’s worth optimizing for the speed of the system if you’re sacrificing features that would speed up your usage of it. Hell, it ultimately comes down to how happy you are with your computer, so if it makes you happy, then even gaudy eye-candy can be the right investment.
    I just do not like these “unused RAM is wasted RAM” calls, because it is absolutely possible to implement few features while still using lots of memory, and that does slow your system down unnecessarily.



  • Ephera@lemmy.ml to Linux@lemmy.ml · KDE Plasma 6.6 released · 4 days ago

    Yeah, I’ve done that occasionally, too, but it adds a load of friction for moving windows between screens, particularly when unplugging or replugging the screen, so it’s still painful enough that I don’t bother with a second screen.

    I guess it also plays a role that I use lots of workspaces, so 1) it’s extra painful and 2) I don’t have as big a need for a second screen, since I can very quickly switch out what the first screen displays.


  • Ephera@lemmy.ml to Linux@lemmy.ml · KDE Plasma 6.6 released · 5 days ago

    Oh boy, feature freeze for Ubuntu 26.04 is on Thursday. Hopefully they still include this update.

    My work laptop unfortunately comes with Kubuntu LTS, and I desperately want the virtual-desktops-only-on-the-primary-screen feature on there. Currently, I’m the guy who actively disables all but one screen, because my workflow does not work at all with the secondary screen switching in sync with the primary screen.


  • Ephera@lemmy.ml to Linux@lemmy.ml · KDE Plasma 6.6 released · 5 days ago

    I still wouldn’t assume it to actually go further than that. It’s a limitation of the EWMH standard, which is used for controlling the placement of windows.

    I don’t have in-depth knowledge of the standard, but I assume it can only represent one desktop as the active desktop, and similar limitations apply elsewhere.
    Maybe you could try to be clever by e.g. always reporting the active desktop of the active screen, but yeah, no idea whether you can do that for all aspects of the standard, and whether applications will still behave as expected.








  • What I always find frustrating about that is that even a colleague with much more Bash experience than me will ask me what those options are if I slap a set -euo pipefail or similar in there.

    I guess I could prepare a snippet like the one in the article, with proper comments instead:

    set -e          # exit immediately when a command fails
    set -u          # treat use of an unset variable as an error
    set -o pipefail # a pipeline fails if any command in it fails, not just the last
    

    Maybe with the whole trapping thing, too.
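    My guess at what that trapping could look like (a sketch, not the article’s exact snippet; set -E makes the ERR trap fire inside functions and subshells, too):

    ```shell
    #!/usr/bin/env bash
    # Sketch: strict mode plus an ERR trap that reports where things went wrong.
    # Wrapped in a subshell here only so the demo itself can exit cleanly.
    (
      set -Eeuo pipefail
      trap 'echo "aborted near line $LINENO" >&2' ERR
      false  # any failing command now triggers the trap and stops the script
      echo "never reached"
    ) || echo "script aborted as expected"
    ```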

    But yeah, I’ll have to remember to use that. Most Bash scripts start out as quickly trying something out, so it’s easy to forget to set the proper options…


  • I don’t have the Bash experience to argue against that, but from general programming experience, I want things to crash as loudly as possible when anything unexpected happens. Otherwise, you might never spot it failing.

    Well, and never mind that it could genuinely break things if an intermediate step fails but the script continues running.
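    A minimal demonstration of that silent failure (the commands are stand-ins, of course):

    ```shell
    #!/usr/bin/env bash
    # By default, a pipeline's exit status is that of its LAST command,
    # so an upstream failure is silently swallowed:
    false | cat
    echo "without pipefail: exit status $?"   # prints 0: the failure went unnoticed

    # With pipefail, the pipeline fails if any stage in it fails:
    set -o pipefail
    false | cat
    echo "with pipefail: exit status $?"      # prints 1
    ```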


  • Huh, so if you don’t opt for these more specific number types, then your program will explode sooner or later, depending on the architecture it’s being run on…?

    I guess times were different back when C was created, with register sizes still much more in flux. But yeah, from today’s perspective, that seems terrifying. 😅


  • What really frustrates me about that is that someone put in a lot of effort to be able to write these things out using proper words, but it still isn’t really more readable.

    Like, sure, unsigned is very obvious. But short, int, long and long long don’t really tell you anything except “this can fit more or less data”. The same concept can be expressed with a growing number, e.g. i16, i32 and i64.

    And when someone actually needs to know how much data fits into each type, well, then the latter approach is just better, because it tells you right on the tin.
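    A small C sketch of the contrast: the classic types only guarantee minimum sizes, so what they print depends on the platform’s data model, while the <stdint.h> fixed-width types (C’s version of i16/i32/i64) say it right on the tin:

    ```c
    /* Classic C integer types vs. the fixed-width types from <stdint.h>. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* These may print 2/4/4 or 2/4/8, depending on the data model. */
        printf("short: %zu, int: %zu, long: %zu bytes\n",
               sizeof(short), sizeof(int), sizeof(long));
        /* These are the same everywhere: 2, 4 and 8 bytes. */
        printf("int16_t: %zu, int32_t: %zu, int64_t: %zu bytes\n",
               sizeof(int16_t), sizeof(int32_t), sizeof(int64_t));
        return 0;
    }
    ```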



  • I think, the problem is that management wants the expert humans to use the non-expert tools, because they’re non-experts and don’t recognize that it’s slower for experts. There’s also the idea that experts can be more efficient with these tools, because they can correct dumb shit the non-expert tool does.

    But yeah, it just feels ridiculous. I need to think about the problem to apply my expertise, and the thinking happens as I’m coding. If I’m supposed to not code and instead have the coding done by someone or something else, then the thinking does not occur and my expertise cannot vouch for anything.
    No, I cannot just do the thinking while doing the review. That’s significantly more time-consuming than coding it myself.