How do y’all manage all these Docker compose apps?

First I installed Jellyfin natively on Debian, which was nice because everything just worked with the normal package manager and systemd.

Then, Navidrome wasn’t in the repos, but it’s a simple Go binary and provides a systemd unit file, so it wasn’t too bad to just download a new binary every now and then.

Then… Immich came… and forced me to use Docker compose… :|

Now I’m looking at Frigate… and it also requires Docker compose… :|

Looking through the docs, looks like Jellyfin, Navidrome, Immich, and Frigate all require/support Docker compose…

At this point, I’m wondering if I should switch everything to Docker compose so I can keep everything straight.

But, how do folks manage this mess? Is there an analogue to apt update, apt upgrade, systemctl restart, journalctl for all these Docker compose apps? Or do I have to individually manage each app? I guess I could write a bash script… but… is this what other people do?

  • UnityDevice@lemmy.zip · 7 hours ago

    I use quadlets instead - they’re part of podman and let you manage containers as systemd services. They support automatic image updates and give you all the benefits of systemd service management. There’s a tool out there that will convert a docker compose config into quadlet unit files to give you a quick start, but I just write them by hand.
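
    For reference, a hand-written quadlet is just a small unit file. A minimal sketch for something like Navidrome might look roughly like this (image, port, and paths are my guesses, so check the podman-systemd.unit docs for the exact keys):

        # ~/.config/containers/systemd/navidrome.container (rootless example)
        [Unit]
        Description=Navidrome music server

        [Container]
        Image=docker.io/deluan/navidrome:latest
        PublishPort=4533:4533
        Volume=/srv/navidrome/data:/data
        Volume=/srv/music:/music:ro
        # opt in to podman auto-update for this container
        AutoUpdate=registry

        [Install]
        WantedBy=default.target

    After a systemctl --user daemon-reload, podman generates a navidrome.service you can start, stop, and journalctl like any other unit.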

  • Chewy@discuss.tchncs.de · 15 hours ago

    It won’t save you from doing a bit of work, but you could use podman. There’s systemd integration, so you can still start/stop/enable your services with systemctl while using docker/container images. You won’t be able to use docker-compose directly, but it’s usually not that hard to replicate the logic with systemd (Immich was a PITA at first because they had so many microservices split into multiple images, but it improved considerably over the first two years).
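
    Roughly, that workflow can look like this (container name, image, and paths are just examples; newer podman versions point you at quadlets instead, but the generated unit still works):

        # create the container once, then let podman write a systemd unit for it
        podman create --name jellyfin -p 8096:8096 \
          -v /srv/jellyfin/config:/config \
          -v /srv/media:/media:ro \
          docker.io/jellyfin/jellyfin:latest
        podman generate systemd --new --files --name jellyfin
        mv container-jellyfin.service ~/.config/systemd/user/
        systemctl --user daemon-reload
        systemctl --user enable --now container-jellyfin.service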

    I do this with NixOS quite a bit, and I’ve yet to use docker compose (although the syntax is different, it’s still the same process).

  • Passerby6497@lemmy.world · 18 hours ago

    docker compose pull; docker compose down; docker compose up -d

    Pulls an update for the container, stops the container and then restarts it in the background. I’ve been told that you don’t need to bring it down, but I do it so that even if there isn’t an update, it still restarts the container.

    You need to do it in each container’s folder, but it’s pretty easy to set an alias and just walk your running containers, or just script that process for each directory. If you’re smarter than I am, you could get the list from running containers (docker ps), but I didn’t name my service folders the same as the service name.
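
    If it helps, a dumb loop over the stack folders does the trick; something like this (the /opt/stacks path is just an example):

        # update every stack that has a compose file in its folder
        for dir in /opt/stacks/*/; do
          if [ -f "$dir/compose.yaml" ] || [ -f "$dir/docker-compose.yml" ]; then
            (cd "$dir" && docker compose pull && docker compose down && docker compose up -d)
          fi
        done

        # or, to get the folders from what's actually running: compose labels every
        # container with its project directory
        docker ps --format '{{.Label "com.docker.compose.project.working_dir"}}' | sort -u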

  • Possibly linux@lemmy.zip · 18 hours ago

    Why wouldn’t you just use Docker compose? It has NFS support built in and there are Ansible playbooks for it.
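
    For example, an NFS share can be declared right in the compose file as a volume (server address and export path here are made up):

        services:
          jellyfin:
            image: jellyfin/jellyfin:latest
            volumes:
              - media:/media:ro

        volumes:
          media:
            driver: local
            driver_opts:
              type: nfs
              o: "addr=192.168.1.50,ro,nfsvers=4"
              device: ":/export/media"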

    I personally moved to podman since it integrates with systemd but it is a bit harder.

  • ikidd@lemmy.world · 20 hours ago

    I never, ever use a bare docker run command unless it’s for a one-off container I’ll never use again. Other than when I’m actively working on a project, I can’t see why anyone would use that.

    Docker compose for every stack, and Watchtower for the containers where I’m not too worried about breaking changes on update.

  • suicidaleggroll@lemmy.world · 1 day ago

    Docker is far cleaner than native installs once you get used to it. Yes, native installs are nice at first, but they aren’t portable, and unless the software is built specifically for the distro you’re running, you will very quickly run into dependency hell trying to set up your system to support multiple services that all want different versions of libraries. Plus, what if you want or need to move a service to another system, or restore a single service from a backup? Reinstalling a service from scratch and migrating over the libraries and config files in all of their separate locations can be a PITA.

    With native installs, it’s pretty much a requirement to start spinning up separate VMs for each service to keep them from interfering with each other and to allow backup and migration to other hosts, and managing 50 different VMs is much more involved and resource-intensive than managing 50 different containers on one machine.

    Also, you said that native installs just need an apt update && apt upgrade, but that’s not true. Services that are packaged for your distro, sure, but most services do not have pre-built packages for every distro. For the vast majority, you have to git clone the source, then build from scratch and install. Updating those services is not a simple apt update && apt upgrade: you have to cd into the repo, git pull, then recompile and reinstall, and pray to god that the dependencies haven’t changed.

    docker compose pull/up/down is pretty much all you need; wrap it in a small shell script and you can bring up/down or update every service with a single command. Also, if you use bind mounts and place them in the service’s directory alongside the compose file, your entire service is self-contained in one directory. To back it up you just “docker compose down”, rsync the directory to the backup location, then “docker compose up”. To restore you do the exact same thing, just reverse the direction of the rsync. To move a service to a different host, you do the exact same thing, except the rsync and docker compose up are now run on the other system.
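
    As a rough sketch, with made-up paths, the whole backup flow is just:

        cd /opt/stacks/immich
        docker compose down                          # stop the stack so the data is quiescent
        rsync -a --delete ./ /mnt/backup/stacks/immich/
        docker compose up -d
        # restore or migrate is the same three steps with the rsync direction (or host) reversed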

    Docker lets you pack an entire service, with all of its dependencies, databases, config files, and data, into a single directory that can be backed up and/or moved to any other system with nothing more than a “down”, “copy”, and “up”, with zero interference with other services running on your system.

    I have 158 containers running on my systems at home. With some wrapper scripts, management is trivial. The thought of trying to manage native installs on over a hundred individual VMs is frightening. The thought of trying to manage this setup with native installs on one machine, if that was even possible, is even more frightening.

    • source_of_truth@lemmy.world · 6 hours ago

      Thanks for the post. I agree, docker compose is really good.

      Can you explain the bind mounts part a bit please?

      • suicidaleggroll@lemmy.world · 2 hours ago

        There are two ways to maintain a persistent data store for Docker containers: bind mounts and docker-managed volumes.

        A Docker managed volume looks like:

        datavolume:/data

        And then later on in the compose file you’ll have

        volumes:
          datavolume:
        

        When you start this container, Docker will create this volume for you in /var/lib/docker/volumes/ and will manage access and permissions. They’re a little easier in that Docker handles permissions for you, but they’re also kind of a PITA because now your compose file and your data are split apart in different locations and you have to spend time tracking down where the hell Docker decided to put the volumes for your service, especially when it comes to backups/migration.
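
        If you do need to find out where a managed volume ended up, docker will tell you (the volume name here is hypothetical):

            docker volume inspect myservice_datavolume --format '{{ .Mountpoint }}'
            # usually prints a path under /var/lib/docker/volumes/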

        A bind mount looks like:

        ./datavolume:/data

        When you start this container, if it doesn’t already exist, “datavolume” will be created in the same location as your compose file, and the data will be stored there. This is a little more manual, since some containers don’t set up permissions properly and, once the volume is created, you may have to shut down the container and chown the volume before it can use it. But once it’s up and running, this makes things much more convenient, since all of the data needed by that service is in a directory right next to the compose file (or wherever you decide to put it, since bind mounts let you put the directory anywhere you like).

        Also, with Docker-managed volumes you have to be VERY careful running your prune commands: if you run “docker system prune --volumes” while you have any stopped containers, Docker will wipe out all of the persistent data for them. That’s not an issue with bind mounts.

  • hperrin@lemmy.ca · 1 day ago

    Don’t auto update. Read the release notes before you update things. Sometimes you have to do some things manually to keep from breaking things.

    • Possibly linux@lemmy.zip · 18 hours ago

      Auto-update is fine for personal stuff. Just have it run on a fixed schedule so that if something breaks, you know when it happened. Rollbacks are easy and very rarely needed.

    • suicidaleggroll@lemmy.world · 1 day ago

      Pretty much guaranteed you’ll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasion that they break. If you have a service that likes to throw out breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.

    • zingo@sh.itjust.works · 1 day ago

      That’s the politically correct answer, of course.

      But from my own experience using Watchtower for over 7 years, I can count on one hand the times it actually broke something. Most of the time it was database-related.

      But you can put apps on the Watchtower ignore list (looking at you, Immich!), which clears that up fairly quickly.
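
      The ignore list is just a label on the container; something like this in the app’s compose file (image/tag are only an example):

          services:
            immich-server:
              image: ghcr.io/immich-app/immich-server:release
              labels:
                # Watchtower skips containers with this label
                - com.centurylinklabs.watchtower.enable=false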

      • WhatAmLemmy@lemmy.world · 1 day ago

        And if you keep all your Docker data on ZFS as datasets + sanoid, you can just roll back to the last snapshot if that ever does happen.
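
        Roughly like this (pool, dataset, and snapshot names are made up; sanoid’s real snapshots follow its autosnap_... naming):

            zfs list -t snapshot -o name tank/docker/immich     # find the latest snapshot
            zfs rollback tank/docker/immich@autosnap_2025-06-01_00:00:02_daily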

  • hoppolito@mander.xyz · 1 day ago

    But, how do folks manage this mess?

    I generally find it less of a mess to have everything encapsulated in docker deployments for my server setups. Each application has its own environment (i.e. I can treat each container as its own ‘Linux machine’ which has only the stuff installed that’s important) and they can all be interfaced with through the same cli.

    Is there an analogue to apt update, apt upgrade, systemctl restart, journalctl?

    Strictly speaking: docker pull <image>, docker compose up, docker restart <container>, and docker logs <container>. But instead of finding direct equivalents to a package manager or system service supervisor, I would suggest reading up on

    1. the docker command line, with its simple docker run command and the (in the beginning) super important docker ps
    2. The concept of Dockerfiles and what exactly they encapsulate - this will really help you understand how docker abstracts away single-app messiness
    3. docker-compose to find the equivalent of service supervision in the container space

    Applications like Immich are multi-container setups, which docker-compose makes much easier to manage while keeping things flexible. In this scenario you stop worrying about updating individual packages and instead manage ‘compose files’, i.e. clumps of programs that work together to provide a specific service.
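
    In day-to-day terms the equivalents end up looking something like this, run per stack (the service name is just an example):

        docker compose pull && docker compose up -d   # ~ apt update && apt upgrade
        docker compose restart immich-server          # ~ systemctl restart <service>
        docker compose logs -f immich-server          # ~ journalctl -fu <service>
        docker compose ps                             # ~ what's running in this stack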

    Once you grok the way compose files make that management easier (they provide the same isolation and management regardless of the outer environment), you have a plethora of tools that make manual maintenance easy (Dockge, Portainer, …) or, more often, make manual maintenance less necessary through automation (Watchtower, Ansible, Komodo, …).

    I realise this can be daunting in the beginning but it is the exact use case for never having to think about downloading a new Go binary and setting up a manual unit file again.

  • frongt@lemmy.zip · 2 days ago

    I just use watchtower to update automatically.

    Docker has a logs command.

  • slazer2au@lemmy.world · 1 day ago

    Each app has a folder and then I have a bash script that runs

    docker compose up -d
    

    in each folder of my containers to update them. It is crude and will break something at some stage, but meh, Jellyseerr or TickDone being offline for a bit is fine while I debug.

  • Eager Eagle@lemmy.world · 1 day ago

    Yeah, I have everything as compose.yaml stacks and those stacks + their config files are in a git repo.

    • village604@adultswim.fan · 20 hours ago

      Hmm, I wonder if I can use this on my Synology to manage things until I get around to finishing my proxmox setup.

    • Kyle@lemmy.ca · 1 day ago

      Wow thank you for this. This looks so much nicer than portainer.

      Subscribing to these communities is so helpful because of discovery like this.

    • Lka1988@sh.itjust.works · 1 day ago

      Same here. Dockge is also developed by the Uptime Kuma dev.

      It’s so much easier to use than Portainer: no weird licensing shit, uses standard Docker locations, and works even with existing stacks. It also helps me keep Docker stacks organized: each compose.yaml lives in its own folder under /opt/stacks/.

      I have 4 VMs on my cluster specifically for Docker, each with its own Dockge instance. The instances can be linked together so that any Dockge instance in my cluster can access all of the Docker stacks across all the VMs.

  • JASN_DE@feddit.org · 2 days ago

    Check out Dockge. It provides a simple yet very usable and useful web UI for managing Docker compose stacks.
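
    Dockge itself is just another compose stack; roughly like this, going from memory, so check the project’s README for the current example:

        services:
          dockge:
            image: louislam/dockge:1
            restart: unless-stopped
            ports:
              - 5001:5001
            volumes:
              - /var/run/docker.sock:/var/run/docker.sock
              - ./data:/app/data
              - /opt/stacks:/opt/stacks
            environment:
              - DOCKGE_STACKS_DIR=/opt/stacks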

    • ook@discuss.tchncs.de · 1 day ago

      I was looking to see if anyone had mentioned it!

      I started with Portainer but it was way too complex for my small setup. Dockge works super well: starting, stopping, and updating containers in a simple web interface.

      Just updating Dockge itself from inside Dockge doesn’t seem to work, but to be fair I haven’t looked into it that much yet.

      • mbirth 🇬🇧@lemmy.ml · 1 day ago

        Can Dockge manage/clean up unused images and containers by now? That’s the only reason I keep using Portainer: it can show all the other stuff and lets me free up space.

        • Midnight Wolf@lemmy.world · 1 day ago

          No, not through the Dockge UI. You can do it manually with standard docker commands (I have a cron task for this), but if you want to visualize things, Dockge won’t do that (yet?).
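
          The cron side is just the standard prune commands on a schedule, e.g. something like this (timing is arbitrary):

              # weekly cleanup of stopped containers and old unused images
              0 4 * * 0  docker container prune -f && docker image prune -af --filter "until=168h"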

  • communism@lemmy.ml · 1 day ago

    Watchtower for automated updates. For containers that don’t have a latest tag to track, editing the version number manually and then docker compose pull && docker compose up -d is simple enough.

    • kata1yst@sh.itjust.works · 1 day ago

      Adding to this: most Docker images support semver tag pinning! It’s a great balance between automated updates and avoiding breakage.
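
      For instance (image and tags are hypothetical), instead of :latest or a fully pinned :2.4.7 you track the minor series:

          services:
            app:
              # 2.4.x patch releases come in automatically, a breaking 3.0 does not
              image: example/app:2.4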