• 0 Posts
  • 67 Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • It’s thanks to Gentoo that I’ve been a Linux sysadmin for over 20 years. That being said, I’ve since moved to Arch and then Debian.

    Some points: On modern systems you won’t really notice any speed improvements from custom compiling the packages, apart from maybe some numbers in artificial benchmarks. On old systems with very limited resources, you can eke out a bit more performance. Back when I was still using Gentoo, my proudest moment was turning a Pentium 1 with 96MB RAM (Yes, MB!), a gift from a colleague to his broke brother, into quite a usable little machine. Browsing, listening to MP3s, email, some simple games.

    I also noticed a noticeable improvement in performance on a 400 MHz Athlon I had set up for my mom.

    That being said, I was only able to do this because I was using distcc to distribute compiling across several machines, which kept compile times at a somewhat sane level. Also, I was doing an unpaid internship at the time, so I basically had all the time in the world and compile times didn’t really bother me.
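    For anyone curious what that looks like: on Gentoo, distcc hooks into Portage with just a couple of config entries. A minimal sketch (the hostnames, IPs and core counts are illustrative, not my actual setup):

```shell
# /etc/portage/make.conf -- let Portage hand compile jobs to distcc
FEATURES="distcc"
# rule of thumb: -jN where N is roughly the total core count across all hosts
MAKEOPTS="-j8"

# /etc/distcc/hosts -- machines willing to accept jobs (local box first);
# the /4 suffix caps how many parallel jobs each host will take
localhost 192.168.1.10/4 192.168.1.11/4
```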

    I had tried to use Linux before: after Windows XP crashed one too many times, I decided to see how things work on Linux. I initially chose an “easy to use” desktop distro (Mandrake Linux). Got everything set up, even 3D acceleration worked. Everything was really nice and fun. Then I tried to tinker under the hood and broke something that I couldn’t figure out how to fix. So I thought maybe I needed to find something even easier, and chose SuSE Linux. Same story: set everything up, desktop working, 3D working, etc… start to tinker, break something, back to square one.

    Then I decided to change my approach and choose the hardest distro. The choice was between Linux From Scratch and Gentoo. Linux From Scratch sounded waay too painful, so I chose Gentoo.

    It took me 3 days until I had a somewhat working system without a desktop. Then another 3 days until I had a desktop running Fluxbox.

    But the learning experience was invaluable. Being forced to use the CLI, and not only that, but to configure more or less everything by hand, takes away the fear of the CLI, and you get a feel for where everything is located in the filesystem, which config files do what, etc… It demystifies the whole thing substantially.

    You suddenly realize that nothing is hidden from you. You are not prevented from accessing anything or tinkering with it.

    The downside is that Gentoo takes a lot of time and effort to maintain. But the learning potential is invaluable, especially if you also use it for little projects in Linux, e.g. a file server, router, firewall, etc…

    Knowing Gentoo got me my first real job as a Linux sysadmin, and before long I was training rookie admins. The first thing I always did with them was run them through the Gentoo bootcamp.

    Once they got to grips with that, everything else wasn’t that difficult.











  • If you want to learn more about computers by using Linux, I suggest something like Gentoo. Don’t know if it’s still the case, but I started with Gentoo back in 2003 and it took me 3 days until I even had a GUI. Learned a ton in the process about Linux under the hood and how it all works together. Thanks to Gentoo I have a well paid career as a Senior Linux System Administrator.

    That being said, I should mention that I grew up with DOS, so I didn’t have the same apprehension as some people when it comes to the command line and editing config files.







  • If the immutability in an OS is well designed, then there shouldn’t really be any downsides or loss of comfort. That is, unless you’re a Linux expert who likes to tinker under the hood.

    The general idea is: the core of the OS is read-only, and everything else that needs to be modified is mounted writable. Ideally, protecting the core of the OS from writes should, for example, prevent malware from installing a modified kernel or boot loader, or prevent the user from accidentally borking something so that their system becomes unbootable. How much of an advantage that is in practice depends on the use case. In the case of Steam OS on the Steam Deck, it’s perfect, since boot issues on the Steam Deck could potentially be trickier to fix than on a standard PC.

    Another theoretical advantage of immutability is reduced wear and tear on certain storage devices. Think of a Raspberry Pi and SD cards, for example: if you could have most of the important parts of the OS read-only on the SD card, and everything else on a USB disk or even an NFS mount, then the SD card should last much longer, since no writes are happening on it.
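    A rough sketch of what that split could look like in /etc/fstab (device names and mount points are illustrative, not a tested Pi config):

```shell
# /etc/fstab -- hypothetical Raspberry Pi layout
# root filesystem on the SD card, mounted read-only
/dev/mmcblk0p2  /         ext4   ro,noatime       0  1
# writable data lives on a USB disk instead
/dev/sda1       /home     ext4   rw,noatime       0  2
# logs go to RAM so they never touch the SD card
tmpfs           /var/log  tmpfs  rw,nosuid,nodev  0  0
```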

    As far as true security benefit is concerned… I can’t really say. It depends on how updates and eventual writes to the immutable part of the OS are actually handled. Obviously, at some point changes do happen, like during a system update. In the case of Steam OS, the system portion is wiped and replaced with the new version. Chimera OS did something similar (I don’t know if they still use the same method): they had a read-only BTRFS partition, and during an update they would provide a new snapshot, which would be downloaded and applied at the next reboot. This approach would hinder automated crypto malware, for example (at least for system files).
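    The core of that update pattern, stripped of the partition/BTRFS details, is an atomic swap: the running system is never modified in place; a complete new version is staged next to it and a pointer is flipped. A toy sketch with plain directories and a symlink (all paths are made up for the demo):

```shell
# Toy A/B update: two complete "system" copies, one pointer
rm -rf /tmp/os-demo && mkdir -p /tmp/os-demo/slot-a /tmp/os-demo/slot-b
echo "version 1" > /tmp/os-demo/slot-a/release
echo "version 2" > /tmp/os-demo/slot-b/release   # the "downloaded update"

# the running system points at slot-a; nothing in it is edited in place
ln -sfn slot-a /tmp/os-demo/current
cat /tmp/os-demo/current/release                 # prints: version 1

# "reboot": flip the pointer atomically; slot-a survives intact for rollback
ln -sfn slot-b /tmp/os-demo/current
cat /tmp/os-demo/current/release                 # prints: version 2
```

    The old copy staying untouched is what makes rollback trivial: flipping the pointer back restores the previous system exactly.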


  • Immutable in this context refers to an OS that can’t be changed while running. The Steam Deck does something like that: basically, all of the OS system files are read-only, so that the user or some malware can’t bork the system. The only parts that are writable are the user’s profile directory and the logs.

    You can still receive updates and install apps. It’s just that this is handled a bit differently than on a standard OS.

    E.g. it could be that the OS provider only issues complete updates, which you then have to reboot to apply. This is the case with Steam OS on the Steam Deck: the system portion of the OS is mounted read-only during use.