Anyone else just sick of following guides that cover 95% of the process, or slightly miss a step, and then spending hours troubleshooting the setup just to get it to work?
I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try to remember how to fix or troubleshoot stuff. I only lightly document things cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff is tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but others just don’t seem worth it.
I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all that vs read a bunch of commands.
Idk, do you get lab burnout? Maybe cuz I do IT for work too it just feels like it’s never ending…
You should take notes on how you set up each app. I have a directory for each self-hosted app, and I include a README.md with things like links to repos and tutorials, the nuances of the setup, itemized lists of things I’d like to do with it in the future, and any shortcomings it has for my purposes. Of course, I also include build scripts, so I can just run “make bounce” and the software starts up without me having to remember all the app-specific commands and configs.
If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called “up next” that details where you’re running into challenges and need to make improvements.
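To make the “make bounce” idea concrete, a per-app Makefile can look something like this. This is only a sketch: the target names and the assumption that the app is deployed with docker compose are illustrative, not a prescription.

```make
# Hypothetical Makefile living next to each app's README.md.
# Assumes the app is managed with docker compose; adjust to taste.

.PHONY: bounce up down logs

bounce: down up        ## full restart: tear down, then bring back up

up:
	docker compose up -d

down:
	docker compose down

logs:
	docker compose logs -f --tail=100
```

The point is that the Makefile doubles as documentation: six months later, `make logs` still works even if you’ve forgotten the app-specific flags.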
As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.
It took just a little searching to find this out after quite a bit of fussing about changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and change it to someplace that would work.
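For reference, the fix can be as small as pointing the pipe somewhere outside /tmp in the stream source. A sketch, assuming a stock snapserver.conf; the exact path and option names may differ by version, so check your install:

```ini
; /etc/snapserver.conf (fragment, illustrative)
[stream]
; Recent Debian releases restrict cross-user writes to FIFOs in the
; sticky /tmp directory (the fs.protected_fifos sysctl), so keep the
; pipe in a directory owned by the snapserver user instead.
source = pipe:///var/lib/snapserver/snapfifo?name=default&mode=create
```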
Setting up the RPi clients with a PirateAudio DAC and SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the history stack to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.
The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an “IT Guy”, although mostly as a programmer. But I remember working on HP-UX 9.0 systems, so I’ve been doing this for a while.
I really don’t know how people without a similar level of experience can even begin to cope.
🤮 I hate GUI config! Way too much hassle. Give me a CLI and a config file any day! I love being able to just ssh into my server anytime, from anywhere, and fix, modify, or install and set up something.
The key to not being overwhelmed is manageable deployment. Only set up one service at a time; get it working, safe, and reliable before switching to actually using it full time. Then, once you’re certain it’s solid, implement the next tool or deployment.
My servers have almost no breakages or issues. They run 24/7/365 and are solid and reliable. The only time anything breaks is during an update or a new service deployment, and those are just user error on my part, not the server’s fault.
Although I don’t work in IT so maybe the small bits of maintenance I actually do feel less to me?
I have 26 containers running, plus a fair few bare metal services. Plus I do a bit of software dev as a hobby.
I love cli and config files, so I can write some scripts to automate it all.
It documents itself.
Whenever I have to do GUI stuff I always forget a step or do things out of order or something.

Story of my life (minus the dev part). I self-host everything off a Proxmox server, with CasaOS for sandboxing and trying new FOSS stuff out. Unless the internet goes out, everything is up 24/7, and rarely do I need to go in there and fix something.
You’re not alone.
The industry itself has become pointlessly layered like some origami hell. As a former OS security guy I can say it’s not in a good state with all the supply-chain risks.
At the same time, many ‘help’ articles are karma-farming ‘splogs’ of such low quality, and/or just slop, that they’re not really useful. When something’s missing, our imposter syndrome makes it feel like a skills issue.
Simplify your life. Ditch and avoid anything with containers or bizarre architectures that feels too intricate. Decide what you need and run those on really reliable options. Auto-patching is your friend (but choose a distro and package format where it’s atomic and rolls back easily).
You don’t need to come home only to work. This is supposed to be FUN for some of us. Don’t chase the Joneses, but just do what you want.
Once you’ve simplified, get in the habit of going outside. You’ll feel a lot better about it.
That’s true. I’ve set up a lot of stuff as tests that I thought would be useful services, but then they never really got used by me, so I didn’t maintain them.
I didn’t take the time to really dive in and learn Docker outside of a few guides, which is probably why it’s a struggle…
It’s a mess. I’m even moving to a different field in it due to this.
Sounds like you haven’t taken the time to properly design your environment.
Lots of home gamers just throw stuff together and just “hack things till they work”.
You need to step back and organize your shit. Develop a pattern, automate things, use source control, etc. Don’t just blindly follow the weirdly-opinionated setup instructions. Make it fit your standard.
This. I definitely need to take the time to organize. A few months ago, I set up a new 4U Rosewill case with 24 hot-swap bays. Expanded my storage quite a bit, but I still need to finish moving some services over. I went from a big outdated SMC server to reusing an old gaming mobo, since it’s an i7 at 95 W vs 125 W x2 lol.
It took a week just to move all my Plex data cuz that Supermicro was only 1GbE.
only 1gbE
What needs more than 1gbe? Are you streaming 8k?
Sounds like you are your own worst enemy. Take a step back and think about how many of these projects are worth completing and which are just for fun and draw a line.
And automate. There are tools to help with this.
Also, on top of that, find time to keep it up to date. If you leave it to rot, things will get harder to maintain.
I sit down once a week and go over all the updates needed, both the docker hosts and all the images they run.
If a project doesn’t make it dead simple to manage via docker compose and environment variables, just don’t use it.
I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
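For what it’s worth, the pattern I mean is one small, self-describing compose file per service. A hedged sketch; the image name, port, and paths here are placeholders, not any specific project’s settings:

```yaml
# docker-compose.yml -- one directory per service, everything in one file.
services:
  app:
    image: example/some-app:latest   # placeholder image
    restart: unless-stopped
    ports:
      - "8080:8080"                  # illustrative port mapping
    environment:
      - TZ=Etc/UTC
    volumes:
      - ./data:/data                 # config + state live next to the file
```

Because state and config sit beside the compose file, backing up or rebuilding a service is just copying the directory and running `docker compose up -d` again.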
Sometimes you see a program and it starts with “Clone this repo,” and it has a docker compose file, six env files, some extra config files, and consists of a front-end container, back-end container, database container, message-queueing container, etc… Just close that web page and don’t bother with that project lol.
That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole.
Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.
I work in IT and, like most, we’re a Windows shop. I have zero professional experience with Linux, but I’m learning through my home lab while simultaneously trying to extract myself from the privacy clusterfuck that is the current consumer tech industry. It’s a transition, and the documentation I find more or less matches the OP’s experience.
I research, pick what seems best for my situation (often the most popular option), get it working with sustainable, minimal complexity, and in short order find that some small, vital aspect of its setup (like the reverse proxy) has literally zero documentation for getting it to work with some other vital part of my setup. I guess I should have made a better choice 18 months ago, back when I didn’t expect I’d need this new service to be accessible. I find some two-year-old GitHub issue comment that allegedly solves my exact problem, but I can’t translate it to the version I’m running because it’s two revisions newer. Most other responses are incomplete, RTFM, or “git gud n00b”, like your response here.
Wherever you work, whatever industry, you can get burnt out. It’s got nothing to do with if you’ve “got what it takes” or whatever bullshit you think “you’re in the wrong field of work and you’re trying to jam a square peg in a round hole” equates to.
I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.
You’ve completely misread everything I’ve said.
Let’s make a few things clear here.
My response is not “Git gud”. My response is that sometimes there are selfhosted projects that are really cool and many people recommend, but the set up for them is genuinely more complex than it should be, and you’re better off avoiding them instead of banging your head against a wall and stressing yourself out. Selfhosting should work for you, not against you. You can always take another crack at a project later when you’ve got more hands on experience.
Secondly, it’s not a matter of whether OP “has what it takes” in his career. I simply pointed out that everything he seems to hate about selfhosting is a fundamental core principle of working in IT. My response to him isn’t that he can’t hack it; it seems more like he just genuinely doesn’t like it. I’m suggesting that it won’t get better, because this is what IT is. What that means to OP is up to him. Maybe he doesn’t care because the money is good, which is valid. But maybe he considers eventually moving into a career he doesn’t hate, and then the selfhosting stuff won’t bother him so much. As a matter of fact, OP himself didn’t take offense to that suggestion the way you did. He agreed with my assessment.
As you learn more about selfhosting, you’ll find that certain things like reverse proxy setup aren’t always included in the documentation, because they’re not really part of the project. How reverse proxies (and by extension HTTP as a whole) work is a technology to learn on its own. I rarely have to read a project’s documentation on reverse proxying because I just know how it works. It’s not really the responsibility of a given project to tell you how to do it, unless the project has a unique gotcha involved. I do love when they include it, though, as I think selfhosting should be more accessible to people who don’t work in IT.
I agree with that 3rd paragraph lol. That’s probably some of my issue at times. As far as IT goes, does it not get overwhelming when you’ve had a 9-hour workday, only to come home and hear someone complain that this other thing you run doesn’t work, and now you have to troubleshoot that too?
Without going into too much detail, I’m a solo operations guy for about 200 end users. We’re a Win11 and Office shop like most, and I’ve upgraded pretty much every system since starting. I’ve utilized some self-host options too, to help in the day-to-day, which is nice as it offloads some work.
It’s just that, especially after a long day, playing IT at home can be a bit much. I don’t normally mind, but I think I just know the Windows stuff well enough through and through, so taking on new Docker or self-host tools is apples and oranges sometimes. Maybe I’m getting spoiled by all the turnkey stuff at work, too.
What is your setup? I have TrueNAS, and there I use the apps that are easy to install and maintain (and the catalog is not small). Basically, from time to time I just come in and update (one button click). I keep networking separate. I had issues with Tailscale for some time, but there I had only 4 services in total, all docker containers, and all except Tailscale were straightforward and easy to update. Now I’ve even moved those: one as a custom app on TrueNAS and the rest to Proxmox LXCs, which solved my Tailscale issue as well. And I am having a good time. But my rule of thumb: before I install anything, I ask myself if I REALLY need it, because otherwise I’d end up with a jillion services that are cool but not really that useful or practical.
I think what I would recommend to you: find a platform like TrueNAS, where lots of things are prepared for you, and don’t bother too much with the custom stuff if you don’t enjoy it. I can also recommend having a test rig or VM so you can always try something first and see if it’s easy to install and stable to use. There were occasions when I was trying stuff and it was just bothersome; I had to hack things together, and in the end I was glad I didn’t “pollute” my main server with it.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
Git: popular version control system, primarily for code
LAMP: Linux-Apache-MySQL-PHP stack for webhosting
LXC: Linux Containers
Plex: brand of media server package
RPi: Raspberry Pi brand of SBC
SBC: Single-Board Computer
SSO: Single Sign-On
6 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.
[Thread #40 for this comm, first seen 29th Jan 2026, 05:20]
I deliberately have not used docker at home to avoid complications. Almost every program is in a debian/apt repo, and I only install frontends that run on LAMP. I think I only have 2 or 3 apps that require manual maintenance (apart from running “apt upgrade”). NextCloud is 90% of the butthurt.
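If you go the plain-apt route, unattended-upgrades covers most of the remaining routine maintenance. A minimal sketch of the standard Debian setup (install the unattended-upgrades package; the file path below is the conventional one, but verify it against your release):

```ini
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```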
I’m starting to turn off services on IPv4 to reduce the network maintenance overhead.
I manage all my services with systemd. Simple services like kanidm, which are just a single native executable, run bare metal under a dedicated user. More complex setups like Immich, or anything that requires a Python venv, run from a docker compose file that gets managed by systemd. Each service has its own user and its own directory.
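A systemd unit wrapping a compose file, roughly as described above, might look like this. This is a sketch: the service name, working directory, and paths are hypothetical placeholders.

```ini
# /etc/systemd/system/immich.service (illustrative)
[Unit]
Description=Immich via docker compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/immich
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` lets systemd treat the stack as "active" after `up -d` returns, so `systemctl stop immich` cleanly tears it down.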
honestly, i 100% do not miss GUIs that hopefully do what you want them to do or have options grayed out or don’t include all the available options etc etc
i do get burnout, and i suffer many of the same symptoms. but i have a solution that works for me: NixOS
ok it does sound like i gave you more homework, but hear me out:
- with NixOS and flakes you have a commit history for your lab services, all centralized in one place.
- this can include as much documentation as you want: inline comments, commit messages, living documents in your repository, whatever
- even services that only provide a Docker based solution can be encapsulated and run by Nix, including using an alternate runtime like podman or containerd
- (this one will hammer me with downvotes but i genuinely do think that:) you can use an LLM agent like GitHub Copilot to get you started, learn the Nix language and ecosystem, and create Nix modules for things that need to be wrapped. i’ve been a software engineer for 15 years; i’ve got nothing to prove when it comes to making a working system. what i want is a working system.
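to make those bullet points concrete, a minimal NixOS module sketch; the service and option names follow nixpkgs conventions, but treat the specifics (service choice, port) as illustrative rather than copy-paste ready:

```nix
# services/audiobookshelf.nix -- one self-hosted service as a Nix module.
{ config, pkgs, ... }:
{
  # The whole deployment is this declaration; commit messages and
  # inline comments like this one become the documentation.
  services.audiobookshelf = {
    enable = true;
    port = 8000;          # illustrative port
  };
  networking.firewall.allowedTCPPorts = [ 8000 ];
}
```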
Selfhoster on NixOS here too.
Nix (and operating services on a NixOS machine) is a learning curve, and even though the project is over 10 years old now, the semantic differences from the conventional approach to distro design/software development/ops are still a source of friction. But the project has come a long way; lots of popular software is packaged and hostable and just works (when you are aware of said semantic differences).
But when it works, and it often does, it’s phenomenal and a very well-integrated experience.
The problem in my experience with using LLMs to assist is that the declarative nature of Nix makes them prone to hallucination: “Certainly, just add services.fooService.enable = true; in your configuration.nix and you’re off to the races.” OTOH, because Nix builds are hermetic and functional, they’re pretty safe to include as a verification tool that something like Claude Code can use to iterate on a solution.

There are some pretty good examples of selfhosting system configurations one can use as inspiration. I just discovered github.com/firecat53/nixos, which is an excellent example of a modular system configuration that manages multiple machines, secrets, and self-hosted services.
I will check that out, even though, yes, it is homework lol.
And +1 for the contribution to help a stranger out!
Lost me at LLMs. My Nix config is over 20k lines long at this point, neatly split into more than a hundred modules and managing 8 physical machines and 30+ VMs. I love it.
But every time I’ve tried to use an LLM for nix, it has failed spectacularly.
Use portainer for managing docker containers. I prefer a GUI as well and portainer makes the whole process much more comfortable for me.
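If you want to try it, Portainer itself deploys as just another container. A sketch along the lines of its quick-start instructions; the image tag and ports may change between releases, so check the current Portainer docs:

```yaml
# docker-compose.yml for Portainer CE (illustrative)
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: always
    ports:
      - "9443:9443"                               # web UI (HTTPS)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # lets Portainer manage Docker
      - portainer_data:/data

volumes:
  portainer_data:
```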
+1 for Portainer. There are other such options, maybe even better, but I can drive the Portainer bus.
Why did I never think of that?! That would make sense lol. Thank you!
No problem. I have been using it for a while and I really like it. There’s nothing stopping you from doing it the old fashioned way if you find you don’t like portainer but once you familiarize yourself with it I think you’ll be hooked on the concept.
I’m sick of everything moving to a docker image myself. I understand on a standard setup the isolation is nice, but I use Proxmox and would love to be able to actually use its isolation capabilities. The environment is already suited for the program. Just give me a standard installer for the love of tech.
You can still use VMs and do containers in there. That’s what I do, makes separating different services very easy.
NixOS for the win! Define your system and services, run a single command, get a reproducible, Proxmox-compatible VM out of it. Nixpkgs has basically every service you’d ever want to selfhost.
I thought that was the point of supporting OCI in the latest version so you can pull docker images and run them like an lxc container
Not trying to start any measuring contest, but what I’ve learned is that there are always people out there who do things 100x more than I do. So yes, 1500 Docker composes are a thing, and I’ve witnessed some composes with over 10k lines.
That doesn’t sound the least bit fun lol







