I’m going round in circles on this one.

What I want to do is:

  • serve up my self-hosted apps with https (to local clients only - nothing over the open web)
  • address them as ‘app.server.lan’ or ‘server.lan/app’
  • preferably host whatever is needed in docker

I think this is achievable with a reverse proxy, some kind of DNS server and self-signed certs. I’m not a complete noob but my knowledge in this area is lacking. I’ve done a fair bit of research but I’m probably not using the right terminology or whatever.

Would anyone have a link to a good guide that covers this?

  • Willdrick@lemmy.world · 7 hours ago

    I recently finished something like this at home, npmplus+pihole. I’ll never do it again, and the moment it breaks I’ll go back to just using Tailscale’s MagicDNS

  • kossa@feddit.org · 10 hours ago

    Lots of people are recommending a proper domain, and I would as well (way easier).

    Just, if you want to go the completely “independent” route: either make sure all the clients you plan to use can accept self-signed certs and skip validation, or you need to create your own CA and import it into your clients.
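
    On a Debian/Ubuntu client, for example, importing your own CA is roughly this (file name is a placeholder; other OSes and some apps keep their own stores):

        # copy your CA cert into the system trust store, then refresh it
        sudo cp my-home-ca.crt /usr/local/share/ca-certificates/my-home-ca.crt
        sudo update-ca-certificates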

    Depending on which clients you plan on using, that might be impossible (e.g. for some IoT devices, some smart TVs and such).

    That is why having a proper domain and using Let’s Encrypt, ZeroSSL, et al. is way easier.

  • philpo@feddit.org · 20 hours ago

    It is absolutely possible, but personally I would highly recommend getting yourself a proper public domain for that, even if you won’t use it otherwise (it’s even somewhat safer if you use a designated one for it).

    To make it really easy, get the domain from someone who also provides DNS with it (Hetzner is a solid choice, as are others; it has to have an API). (E.g. “mydomain.casa”.)

    Now get an internal DNS server that can handle its own zones. I always recommend Technitium, but there are other choices. Pi-hole is not a good choice here.

    Next thing is a reverse proxy, as you mentioned. If you want it easy, Nginx Proxy Manager (NPM) is a good choice, but it limits what one can do later. But it kind of works out of the box. Traefik and Caddy are both often named, but I found neither of them as “fire and forget” as NPM is - and Caddy can’t do a lot of things either. Traefik is what I currently use, but even using Manatrae or similar GUIs it’s sometimes a pain. But it’s absolutely powerful, especially when you run a lot of Docker containers on the same host. Tbh, if I didn’t have some special requirements I would still use NPM.

    Now, what to do? (Not a full manual, more of an overview to show that it’s not that complicated.)

    1. Install all of the above in Docker.
    2. Set up NPM with a wildcard certificate: register with zerossl.com (has advantages over Let’s Encrypt), add them as a provider and get a wildcard(!) certificate (*.mydomain.casa).
    3. Set up a proxy host. You simply add the domain name (nextcloud.mydomain.casa), point it to the actual container (192.168.1.10:3000), choose the wildcard certificate for SSL and switch on “force SSL”.
    4. Go to the DNS server, create a DNS zone “mydomain.casa” and then simply add “nextcloud.mydomain.casa” and point it to the reverse proxy IP. Done.

    For good practice I would also recommend keeping a zone that links directly to the services, so you can use that whenever necessary (mydomain.internal).
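
    A rough sketch of step 1 (image tags, ports and paths are just examples, adjust to your setup):

        # Nginx Proxy Manager (admin UI on :81)
        docker run -d --name npm \
          -p 80:80 -p 443:443 -p 81:81 \
          -v "$PWD/npm/data:/data" -v "$PWD/npm/letsencrypt:/etc/letsencrypt" \
          jc21/nginx-proxy-manager:latest

        # Technitium DNS (web console on :5380)
        docker run -d --name dns \
          -p 53:53/udp -p 53:53/tcp -p 5380:5380 \
          -v "$PWD/dns:/etc/dns" \
          technitium/dns-server:latest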

    • hietsu@sopuli.xyz · 12 hours ago

      Umm, wildcard certs from ZeroSSL seem to run at $52.99 per month, billed yearly. Free plan does not have those, neither does Basic.

      • philpo@feddit.org · 9 hours ago

        Sorry, then proceed with LE. Got that part mixed up, you are totally right.

  • coffeeboba@lemmy.world · 16 hours ago

    Pasting a comment I made in another similar thread:

    I use a reverse proxy (Caddy) and point a domain at my machine.ts-domain.ts.net, which hosts Caddy.

    This way I can go to service.my.domain instead of machine:port, as long as I’m connected to Tailscale. Any devices not on my Tailscale network just get bounced if they hit the domain.
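
    For simple cases the Caddy side can be a single command (hostname and port are placeholders; how the cert gets issued depends on how you validate the domain):

        # forward the nice hostname to the actual service port
        caddy reverse-proxy --from service.my.domain --to localhost:8096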

  • glitching@lemmy.ml · 9 hours ago

    Imma be the XY-problem guy here - ditch the https part. Without it, you don’t gotta deal with certs, signing, shit that’s outside your LAN, etc. It’s your LAN, do you really need that level of security? Who’s gonna sniff packets and shit on your LAN?

    Now all you need is Pi-hole, where you set up your hostnames (jellyfin.lan, nextcloud.lan, etc.), and an nginx proxy that maps e.g. jellyfin.lan to 192.168.0.123:8096. Both of them run plenty fine in Docker.
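
    Roughly like this (IPs are placeholders, adjust paths if Pi-hole runs in Docker; newer Pi-hole versions manage local records through the web UI instead of custom.list):

        # Pi-hole: local DNS records, one "IP hostname" per line
        echo "192.168.0.50 jellyfin.lan"  | sudo tee -a /etc/pihole/custom.list
        echo "192.168.0.50 nextcloud.lan" | sudo tee -a /etc/pihole/custom.list
        pihole restartdns

        # nginx: a minimal vhost in /etc/nginx/conf.d/jellyfin.conf along the lines of
        #   server { listen 80; server_name jellyfin.lan;
        #            location / { proxy_pass http://192.168.0.123:8096; } }
        sudo nginx -t && sudo nginx -s reload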

    • Willdrick@lemmy.world · 7 hours ago

      You say that, but I’ve seen so many dodgy IoT devices… Especially after deploying Pi-hole you start to see so much random traffic from stupid stuff like a smart plug or a TV box.

      • non_burglar@lemmy.world · 5 hours ago

        If you’re on the same subnet, no amount of reverse proxying will help with dodgy apps. It’s more appropriate to put the dodgy IoT devices in a DMZ to control what they can do.

        Putting https on these is fine, but it’s not a solution to isolating bad clients.

    • jonno@discuss.tchncs.de · 14 hours ago

      This is the way! Also, without having to open a port, you can use DNS over TLS. And you can use DuckDNS or any other free dynamic DNS provider.

  • Elvith Ma'for@feddit.org · 23 hours ago

    I have this setup. I bought a domain (say homeserver.tld) from a registrar that allows zone edits with an API. Then I use certbot with a plugin that supports my registrar to get real Let’s Encrypt certificates. Usually Let’s Encrypt connects to your server to ensure that it responds to the domain you’re requesting a certificate for, but this challenge can also be done by editing the DNS records of your domain to prove ownership. That is called the DNS-01 challenge and is useful if your domain is not publicly reachable. Google for “certbot DNS-01” plus your registrar to find some documentation.

    Some of the VMs/LXCs now get certificates for a specific subdomain (“some-app.homeserver.tld”), others just get a wildcard certificate (“*.homeserver.tld”) - e.g. my Docker host.
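
    With the Cloudflare plugin as one example (swap in whichever plugin matches your registrar), the request looks roughly like this:

        # DNS-01: certbot proves ownership by creating a TXT record via the registrar's API,
        # so the host never needs to be reachable from the internet
        certbot certonly --dns-cloudflare \
          --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
          -d homeserver.tld -d '*.homeserver.tld'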

    • robador51@lemmy.ml · 22 hours ago

      I do the same. I have a real domain and certbot does a DNS challenge. It was a little fiddly and took a moment to figure out; I think that was because I couldn’t get Caddy to work, but Traefik worked a charm. Self-signing is more complex I think, because you’ll need to accept the root in every client (browsers especially?), which is even more fiddly.

      • Elvith Ma'for@feddit.org · 13 hours ago

        Yeah, that’s exactly why I didn’t use my own CA. There’s a plethora of devices that you now need to import the CA to, and then you need to hope that every application uses the system cert store and doesn’t roll its own (IIRC e.g. Firefox uses its own cert store and doesn’t use the system one. Same for every Java-based application, …)

        It’s fiddly with Caddy, as you need a specific plugin to get it to work with anything other than the default challenge. That means using a custom Caddy build - and with Docker, you’re SOL. BUT you can just use certbot and point Caddy to the cert files in your file system.

  • Funky_Beak@lemmy.sdf.org · 17 hours ago

    Personal solution:

    • OpenSSL certs (lots of YouTube videos on best practice there).
    • Nginx Proxy Manager as the reverse proxy.
    • AdGuard Home using a DNS rewrite pointing to the wildcard domain.

    I find this is enough for intranet use. You can get fancy and put it over a WireGuard or Tailscale network too.
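
    For the OpenSSL part, a bare-bones self-signed wildcard (not a full CA; the domain is a placeholder) looks something like:

        # one self-signed wildcard cert; -addext needs OpenSSL 1.1.1 or newer
        openssl req -x509 -newkey rsa:4096 -sha256 -days 730 -nodes \
          -keyout wildcard.key -out wildcard.crt \
          -subj "/CN=*.home.lan" \
          -addext "subjectAltName=DNS:*.home.lan,DNS:home.lan"

    The AdGuard rewrite then just maps *.home.lan to the proxy’s IP.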

  • borax7385@lemmy.world · 23 hours ago

    1. Point the hostname of your service to the IP of the proxy in the DNS.

    2. For the certs you need an internal CA. I use Step CA, which has ACME support, so the proxy can get certificates easily.

    3. Add the root CA certificate to your computer certificate trust store.

    4. Profit!!
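
    Roughly, with smallstep’s CLI (names and addresses are placeholders):

        # 2. create the CA with an ACME provisioner, then run it
        step ca init --name "Home CA" --dns ca.home.lan --address ":8443" \
          --provisioner admin --acme
        step-ca "$(step path)/config/ca.json"

        # 3. trust the root cert in this machine's store (copy root_ca.crt to other clients and repeat)
        step certificate install "$(step path)/certs/root_ca.crt"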

  • Scrath@lemmy.dbzer0.com · 23 hours ago

    I have a pretty similar setup currently running but I bought a public domain that I use for my certificates.

    I used to have a pi-hole as my DNS server where I entered all subdomains and pointed them at the right address, namely my reverse-proxy.

    My reverse-proxy, Nginx Proxy Manager, got the certificates from my domain registrar and forwarded the requests to the correct services based on subdomain.

  • exu@feditown.com · 23 hours ago

    It’s probably just easier to use public certs with DNS verification than building and distributing your own certs.

  • solrize@lemmy.ml · 23 hours ago

    I don’t know of an all-in-one-place guide but there’s not a whole lot to it. Just look up how to do each of the parts you mentioned. I’d say that buying a domain and using LetsEncrypt is not really in the self-hosting spirit (i.e. you should run your own DNS and CA) but it’s up to you. Running a serious CA with real security is quite hard, but for your purposes you can just do whatever. There are various programs or scripts for it. I still use CA.pl from the openssl distro, but that’s very old school and people here hate it. Anyway, you will do a little head scratching to get everything working right, but it will be educational, so you’ll get something out of it in its own right.
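
    For reference, the old-school CA.pl flow is roughly this (the script’s path varies by distro, often under /usr/lib/ssl/misc):

        CA.pl -newca      # interactive: creates ./demoCA with the root key and cert
        CA.pl -newreq     # writes newkey.pem and newreq.pem (the CSR)
        CA.pl -sign       # signs newreq.pem with your CA, writes newcert.pem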

  • timuchan@lemmy.wtf · 23 hours ago

    I think you could achieve this with largely the same method as typical when using Nginx, Caddy, etc.

    The main difference is that where you’d usually use ACME/Let’s Encrypt, you’ll likely need to generate your own certs using a tool like mkcert. You’ll need to get the CA cert used to generate the SSL certs and install it on any other systems/browsers that will be accessing the apps over https (mkcert will install it for the system you generate from).
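
    With mkcert that looks roughly like this (hostnames are placeholders):

        mkcert -install                         # creates a local CA and trusts it on this machine
        mkcert "app.server.lan" "*.server.lan"  # writes a cert/key pair for the proxy to serve
        mkcert -CAROOT                          # prints where rootCA.pem lives, to copy to other devices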