
  • :/ shit.

    I’m pretty sure I saw this a few months ago and moved to the beatkind/watchtower fork, but that hasn’t been updated in ~6 months either. (The dev’s only been active in private repos, so they’re still around, just not actively working on watchtower.)

    Guess I’ll find another solution. Hell, I might just put my own script in crontab; looping through folders running docker compose down/pull/up isn’t too hard, really.
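    Something like this rough sketch (assuming each stack lives in its own folder under /opt/stacks; adjust paths to taste):

        #!/bin/sh
        # poor-man's watchtower: pull new images, recreate whatever changed
        for dir in /opt/stacks/*/; do
            (cd "$dir" && docker compose pull && docker compose up -d)
        done

    Compose only recreates containers whose image actually changed after a pull, so you can even skip the down. Stick it in crontab (e.g. 0 4 * * * /opt/stacks/update.sh) and call it a day.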

  • A bit of redundancy is key.

    I have my primary DNS, pihole, running on an RPi that’s dedicated to it, as well as a second backup instance running in a docker container on my main server machine.

    Nebula-Sync keeps the two synchronized with each other, so if a change is made on one (things like local DNS records or changes to blocklists), it automatically syncs to the other.
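    The sync container itself is just a few env vars; something like this (from memory, so double-check the variable names against the nebula-sync README; the IPs and passwords are placeholders):

        docker run -d --name nebula-sync \
            -e PRIMARY="http://192.168.1.2|primary-app-password" \
            -e REPLICAS="http://192.168.1.3|replica-app-password" \
            -e FULL_SYNC=true \
            -e CRON="0 * * * *" \
            ghcr.io/lovelaze/nebula-sync:latest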

    If either one goes down (dead SD cards, me playing with things, power surges, whatever), the other picks up the slack until I fix the broken one, which is usually little more than a re-install, then manually syncing them using pihole’s ‘Teleporter’ settings. Worst case, restore a backup (that you’re definitely taking. Regularly. Right?)
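    You can even script those Teleporter backups; the v5 command below is the one I know, and v6 moved it into pihole-FTL if I remember right, so verify before trusting it:

        # dump a teleporter archive (settings, local DNS records, adlists)
        # into the current directory
        pihole -a -t
        # v6 equivalent, from memory:
        # pihole-FTL --teleporter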

    Both piholes use Cloudflared (here’s their guide *edit: I see I’ll have to find a new method for this… Just going to pin the containers to tag ‘2025.11.1’ for now) to translate ALL DNS traffic into DoH traffic, encrypting it and sending it to the provider of my choice instead of my ISP or any other plain-DNS resolver. The router hands out both local DNS IPs with DHCP, and outbound port 53 (regular DNS) is blocked at the router, so all LAN devices MUST use the local DNS or their own DoH config. Plain DNS won’t make it out.
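    The pinned container boils down to roughly this (5053 is an arbitrary local port; point pihole’s custom upstream DNS at it):

        docker run -d --name cloudflared --restart unless-stopped \
            -p 5053:5053/udp -p 5053:5053/tcp \
            cloudflare/cloudflared:2025.11.1 \
            proxy-dns --address 0.0.0.0 --port 5053 \
            --upstream https://1.1.1.1/dns-query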

    DNS adblocking isn’t perfect, but it’s a really nice tool to have. And having an internal DNS to resolve names for local-only services is super handy: most of my subdomains are only used internally, so pihole handles those DNS records, while external DNS only has the records for publicly accessible things.
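    Quick sanity check from any LAN machine (names and IPs are just examples):

        # internal-only record, answered by pihole:
        dig @192.168.1.2 jellyfin.home.example.com +short
        # and confirm plain DNS can't escape the LAN
        # (should time out at the router):
        dig @8.8.8.8 example.com +short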


    I have the same issue with Immich on Android. It pretty much never uploads files until I manually open the app; then the app refuses to acknowledge it has uploaded those new files until it’s closed and re-opened :( (Power saving is set to unrestricted in Android and background data usage is allowed; I’ve been through troubleshooting very thoroughly, it just doesn’t work.)

    FolderSync has been the only reliable (non-root) backup solution I’ve used. It’s set to monitor my image folders for changes and upload any new files as soon as they’re created; this works ~85% of the time. It’s also set with a few schedules to check for changes every 3 hrs, backing up everything on the phone the app can access; this catches anything the on-creation detection misses, while also backing up more than just my images. I have yet to see that fail after ~3 years.

  • I only bring it up because you explicitly said you have no idea why it doesn’t work.

    Take things at a comfortable pace; there’s no sense overwhelming yourself. Then you just forget what you’ve done and end up lost in your own maze.

    I started with Plex myself, almost 10 years ago. Moved to Emby, where I learned about buying a domain and setting up SSL through a reverse proxy, and just continued to explore from there. Today I run ~26 containers/projects across three systems, and I’m always keeping an eye out for interesting new things.

    Best of luck with your journey m8.


    Sounds like you’re behind CGNAT, which essentially means there’s another router, owned by your ISP, sitting between yours and the open internet. That router would also need port forwarding configured, but your ISP will never do that for you.

    It complicates things, but the solutions are tools like tailscale or cloudflare Tunnels, or renting a cheap VPS just to host a proxy/VPN.
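    The VPS route can be as simple as a reverse SSH tunnel while you figure out something nicer (8096 here assumes jellyfin, and the VPS’s sshd needs GatewayPorts yes):

        # expose local port 8096 on the VPS's public interface
        ssh -N -R 0.0.0.0:8096:localhost:8096 user@my-vps.example.com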

    Plex solves this by using their own public servers as a proxy for you, but this is part of how they keep control over your users/server/data, such as blocking remote streaming… That makes more than a few people uncomfortable.

    Plex has an automatic proxy (relay) service hosted on their public servers. If you haven’t, or can’t, configure port forwarding correctly, Plex will route the connection through their own servers.

    The problem is, that also means Plex (the company) has total control over your server and the data sent between it and clients, if they so choose: anything from quietly logging the data sent back and forth, to controlling who can connect and what they can do while they’re connected.

    Jellyfin has to be correctly exposed to the internet via port forwarding or tools like tailscale/a VPN, but it’s entirely your server, under your control. You have ultimate say over how it can be accessed, though that also means you’re responsible for actually setting it up.
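    If port forwarding isn’t an option, tailscale is about the least painful path; a rough sketch (the address is whatever your tailnet assigns):

        # on the server and on each client device:
        tailscale up
        # note the server's tailnet address:
        tailscale ip -4        # prints something like 100.64.0.5
        # then point the jellyfin app at http://100.64.0.5:8096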

    Yeah; I mean, if this were any other content from the same shows/movies, it’d be a non-issue covered under Fair Use.

    I can understand being upset about entirely new content, AI deepfakes for example; but this content was created and distributed to the public, intentionally, with the consent of the individuals filmed in it. It’s just been transformed into a different format; arguably, in a creative and educational manner (the same way something like a ‘Family Guy funny moments’ compilation is).

    If you didn’t want people looking at your nude body, why did you perform nude scenes in front of a camera, knowing it’d be distributed to the public…