🇨🇦

  • 13 Posts
  • 552 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • This comment prompted me to dig a little deeper. I looked at the history for each show where I’ve had failed downloads from those groups.

    For SuccessfulCrab: any time a release has come from a torrent tracker (I only have free public torrent trackers), it’s been garbage. I have, however, had a number of perfectly fine downloads with that group label whenever it was retrieved from NZBgeek. I’ve narrowed that filter to block the string ‘SuccessfulCrab’ on all torrent trackers, but allow NZBs. Perhaps there’s an impersonator trying to smear them or something, idk.

    ELiTE, on the other hand: I’ve only got a history of grabbing their torrents, and every one of them was trash. That’s going to stay blocked everywhere.


    The ‘Block Potentially Dangerous’ setting is interesting, but what exactly does it look for? The torrent client is already set to not download file types I don’t want, so will it recognize and remove torrents that end up empty (everything marked ‘do not download’)? I’m having a hard time finding documentation for it.
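The protocol-split filter described above is essentially one rule. A minimal sketch of that logic in Python (the group names are from this thread, but the function and field names are hypothetical, not Sonarr's actual data model):

```python
# Sketch of the filter described above: block a release-group string on
# torrent trackers while still allowing it from usenet indexers.
# Function and field names are hypothetical, not Sonarr's actual code.

BLOCKED_ON_TORRENTS = {"successfulcrab"}  # garbage only when torrent-sourced
BLOCKED_EVERYWHERE = {"elite"}            # garbage from every source

def allow_release(group: str, protocol: str) -> bool:
    """Return True if a release with this group/protocol should be grabbed."""
    g = group.lower()
    if g in BLOCKED_EVERYWHERE:
        return False
    if protocol == "torrent" and g in BLOCKED_ON_TORRENTS:
        return False
    return True

print(allow_release("SuccessfulCrab", "torrent"))  # False
print(allow_release("SuccessfulCrab", "usenet"))   # True
print(allow_release("ELiTE", "torrent"))           # False
```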






  • To be perfectly honest, auto updates aren’t really necessary; I’m just lazy and like automation. One less thing I’ve gotta remember to do regularly.

    I find it kind of fun to discover and explore new features on my own as they appear. If I need documentation, it’s (usually…) there, but I’d rather just explore. There are a few projects where I’m avidly following the forums/git pages so I’m at least aware of certain upcoming features, others update whenever they feel like it and I’ll see what’s new next time I happen to be messing with them.

    Watchtower notifies me whenever it updates something so I’ve at least got a history log.


  • I’ve had Immich auto-updating alongside around 36 other Docker containers for at least a year now. I’ve very rarely had issues, and I just pin specific version tags on the things that have caused problems. Redis and Postgres, for example, are pinned in both Immich and Paperless-NGX because upgrading their old databases takes manual work. The main projects, though, have always auto-updated just fine for me.

    The reason I don’t really worry about it: Solid backups.

    BorgBackup runs in the early AM, making a backup of the entire system (not including bulk storage) shortly before Watchtower updates almost all of my containers.

    If I were to get up in the morning and find a service isn’t responding (Uptime-Kuma notifies me via email if it can’t reach any container or service), I’ll mess with it and try to get the update working (I’ve only actually had to do this once so far; the rest has updated smoothly). Failing that, I can just extract yesterday’s data from the most recent backup and restore the previous version.

    Because of Borg’s compression and de-duplication, successive backups of the same system can be stored in an absurdly small amount of space. I currently have 22 backups of ~532GB each, going back a full year, stored in just 474GB of disk space. Raw, that’d be ~11.8TB.
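A quick back-of-the-envelope check of those numbers (figures taken straight from the comment above, so approximate):

```python
# Rough check of the Borg storage figures quoted above.
backups = 22
size_per_backup_gb = 532   # ~532 GB per full-system backup
stored_gb = 474            # actual on-disk size of the backup repository

raw_tb = backups * size_per_backup_gb / 1000   # decimal TB
ratio = backups * size_per_backup_gb / stored_gb

print(f"raw: {raw_tb:.1f} TB")   # ~11.7 TB before dedup/compression
print(f"saved: {ratio:.0f}x")    # roughly a 25x reduction
```

De-duplication does the heavy lifting here: most blocks of a nightly full-system backup are identical to the previous night's, so Borg stores them only once.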





  • Major version changes for any software, from the OS right down to a simple notepad app, should be applied as sequentially as possible (11>12>13>etc). Skipping over versions is just asking for trouble, as it’s rarely tested thoroughly.

    It might work, but why risk it?

    An example: if 12 makes a big database change but you skip over that version, 13 may not recognize the databases left by 11, because 12 had the code to recognize and reformat the old database, while that code was seen as unnecessary and removed from 13.

    Stuff like this is also why you can’t always revert to an older version while keeping the data/databases from the newer software.
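That failure mode is easy to demonstrate with a toy model where each release only ships the migration code for the schema one version behind it (the versions and names here are made up, not any particular app):

```python
# Toy model of sequential-only upgrades: each release can read its own
# schema and the one immediately before it, nothing older.
# All versions/migrations here are hypothetical.

MIGRATIONS = {
    # app version: (oldest schema it can read, schema it writes)
    12: (11, 12),  # v12 still ships the code to upgrade a v11 database
    13: (12, 13),  # v13 dropped the v11 migration code as "unnecessary"
}

def upgrade(db_schema: int, app_version: int) -> int:
    readable, writes = MIGRATIONS[app_version]
    if db_schema not in (readable, writes):
        raise RuntimeError(f"v{app_version} cannot read a v{db_schema} database")
    return writes

schema = upgrade(11, 12)       # 11 -> 12: fine
schema = upgrade(schema, 13)   # 12 -> 13: fine
print(schema)                  # 13

try:
    upgrade(11, 13)            # skipping v12
except RuntimeError as err:
    print(err)                 # v13 cannot read a v11 database
```

The same model also shows why downgrades fail: v12 has no code to turn a v13 database back into a v12 one.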




  • Typical piracy requires you to search sources/indexers yourself, decide on the best search result for what you’re trying to download, pass that to your download client, then manually name and sort the downloaded files into media folders once the download completes.

    The *arrs automate this entire process for several media types (movies, TV, music, etc.), combining search results from dozens of indexers to decide what to download.

    Now, I open a webpage, search for a movie/show (results from IMDb), and select an item I want to watch. ~15min later, that item has been found, downloaded, and sorted into my media folders where Emby/Jellyfin can display it to me or my friends.

    Add to this Ombi, a requests platform that allows my friends and family to request media and have the *arrs automatically grab it. Since setting that up a little over a year ago, it has filled almost 400 requests (not including media I’ve grabbed/requested myself) without me ever having to manage requests manually.

    On top of grabbing media on request, the *arrs also monitor the sources you’ve configured, watching for new uploads and grabbing content that’s missing from your library but monitored for, such as newly aired episodes, media that couldn’t be found earlier, or quality upgrades for existing media (if configured to upgrade).

    Every time a new episode airs for a show I’ve added, it automatically grabs it for me. (currently 486 series monitored here)
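At its core, the loop the *arrs run can be summed up in a few lines. A heavily simplified sketch (every function name here is made up; the real projects are far more involved):

```python
# Heavily simplified sketch of the *arr automation loop: search every
# indexer, score the combined results, grab the best one, then rename
# and sort it into the library. All names here are hypothetical.

library = {}

def download(release):
    """Stand-in for handing a release to a torrent/usenet client."""
    return release["title"]

def sort_into_library(item, release):
    """Stand-in for renaming/moving the finished download."""
    library[item] = release["title"]

def run_pipeline(wanted, search):
    for item in wanted:
        results = search(item)  # combined results from all indexers
        if not results:
            continue            # stays monitored; retried on the next pass
        best = max(results, key=lambda r: r["score"])
        download(best)
        sort_into_library(item, best)

# Fake indexer results for two monitored episodes
def fake_search(item):
    return {
        "S01E01": [{"title": "Show.S01E01.1080p", "score": 90},
                   {"title": "Show.S01E01.720p", "score": 60}],
        "S01E02": [],  # nothing found yet; retried later
    }[item]

run_pipeline(["S01E01", "S01E02"], fake_search)
print(library)  # {'S01E01': 'Show.S01E01.1080p'}
```

The real scoring step is where most of the configuration lives: quality profiles, release-group preferences, and blocklists like the ones discussed earlier in this thread all feed into which result gets picked.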