• 0 Posts
  • 52 Comments
Joined 9 months ago
Cake day: December 14th, 2023


  • I’ve done a backup swap with friends a couple of times. Security wasn’t much of a worry since we connected to each other’s boxes over ssh or wireguard or similar and used tools that support encryption. The biggest challenge for us was that in my selfhosting friend group we all prefer different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems and set that up. The second challenge was keeping the remote access we set up for each other online - and that’s what killed the project, since we all eventually stopped maintaining it and nobody seemed to care. So if I were to do it again I would make sure all participants have alerts monitoring their shared endpoint.
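
    As a rough illustration of the kind of alerting I mean: a tiny scheduled check that makes sure a peer’s endpoint is still reachable and yells if it isn’t. This is just a sketch - the host, port, and webhook URL are made-up placeholders, not what we actually ran:

    ```typescript
    // Minimal reachability check for a backup peer's endpoint (sketch only).
    // Run it from cron/systemd on each participant's side.
    import { Socket } from "node:net";

    const PEER_HOST = "peer.example.org"; // hypothetical WireGuard/SSH endpoint
    const PEER_PORT = 22;                 // whatever port you agreed on
    const WEBHOOK = "https://alerts.example.org/notify"; // hypothetical alert hook

    // Try to open a TCP connection; resolve true if it connects before the timeout.
    function checkPort(host: string, port: number, timeoutMs = 5000): Promise<boolean> {
      return new Promise((resolve) => {
        const sock = new Socket();
        sock.setTimeout(timeoutMs);
        sock.once("connect", () => { sock.destroy(); resolve(true); });
        sock.once("timeout", () => { sock.destroy(); resolve(false); });
        sock.once("error", () => resolve(false));
        sock.connect(port, host);
      });
    }

    checkPort(PEER_HOST, PEER_PORT).then(async (up) => {
      if (!up) {
        // Send something you'll actually notice; a webhook is just one option.
        await fetch(WEBHOOK, { method: "POST", body: `${PEER_HOST}:${PEER_PORT} unreachable` });
      }
    });
    ```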




  • To add to the other reply, client isolation is about controlling whether an AP, switch, or router willingly forwards traffic between clients. Because of that, it doesn’t help against someone listening to packets over the air before they’ve been received by an AP. For that kind of protection you need a wifi-specific security measure - I think “Enhanced Open” is what you’d be interested in. It lets you run an open, passwordless wifi that still generates temporary encryption keys for each connected client; from there it behaves as if it were using WPA, so you don’t need to enter a password but your traffic is encrypted and protected from anyone else listening in on the WiFi.

    If you combine both then you should have a network where each device is isolated both over the air and from a routing perspective so that each device only sees an Internet connection and no other devices.


  • The same way filebot and any other tool does - the file needs to have some label, either an absolute episode number or a season + episode number. I’m not aware of any tool that can look at the contents of the video and figure out which episode it is visually, without any information from the filename - but I’d be happy to be proven wrong, because I would be impressed.

    Sonarr/radarr does analyze the content somewhat, but that’s just for gathering resolution, codec, HDR, audio languages, and subtitle information, which can all be added to the filename format for inclusion during renaming.
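
    For illustration, here’s roughly the kind of filename matching that’s going on - a toy sketch, not the actual patterns sonarr or filebot use:

    ```typescript
    // Toy matcher: pull a season+episode (S02E05) or an absolute episode
    // number out of a filename. If the name carries neither, there's nothing
    // for a renamer to work with.
    function parseEpisode(filename: string): { season?: number; episode: number } | null {
      const sxe = filename.match(/[sS](\d{1,2})[eE](\d{1,3})/);
      if (sxe) return { season: Number(sxe[1]), episode: Number(sxe[2]) };

      const absolute = filename.match(/(?:^|[ ._-])(\d{1,4})(?=[ ._-])/);
      if (absolute) return { episode: Number(absolute[1]) };

      return null; // no label in the name -> no match
    }

    console.log(parseEpisode("Show.Name.S02E05.1080p.mkv")); // { season: 2, episode: 5 }
    console.log(parseEpisode("Show Name - 113 [720p].mkv")); // { episode: 113 }
    ```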


  • I second using sonarr/radarr - once imported, it detects episodes and lets you one-click rename to a specific format and folder organization.

    If you don’t want any of the other features of sonarr/radarr (like having a way to filter and manage your collection to see what’s in what quality or from what release group, searching multiple indexers with a single search, being able to send a specific search result to a downloader and have it automatically imported and organized when complete, or auto downloading based on requests using scoring rules that you set), then there’s also filebot, which a lot of people seem to like and which seems to be just for matching against online metadata and renaming.

    But I haven’t tried filebot since I like the extra features and capabilities of sonarr/radarr. They make it easy to manage several library folders - like an archive for anything that’s been reviewed, is complete, and is in a quality/codec I’m satisfied with - while keeping track of currently airing shows in my active folder, which is also where I keep auto-downloaded stuff I haven’t reviewed yet.



  • I use a nuc10i7fnkn, and since transcoding is almost entirely done by the dedicated Quick Sync hardware in the CPU, you don’t end up actually using the CPU much. So I’m sure it would work on an older generation or the i5 version. I don’t know much about the N100 but it looks like it would be very capable - supposedly it boosts to 3+ GHz and it’s a 10nm node compared to my NUC’s 14nm. But the GPU has the same number of execution units, so I’m not sure the Quick Sync transcoding performance is that different. I saw someone mention 3 simultaneous 4K transcodes and I think I got about that much on mine.

    Generally for Quick Sync performance you just compare the Intel HD/UHD Graphics model (like 630, 730, etc.) and the number of execution units, and that should correlate with performance. Also check the Wikipedia page for Quick Sync for codec compatibility (under the Hardware decoding and encoding section), but anything recent will handle most stuff you’d need: https://en.m.wikipedia.org/wiki/Intel_Quick_Sync_Video


  • I actually run my arrstack on a Synology - it has official support for docker and docker-compose. Granted, I do have a higher-powered model (the DS1621xs+), but most of the arrstack is fairly low-power friendly.

    You can also get away with running Plex on a nas, but I would only do it if 1) your nas has a Quick Sync-supported CPU and you get that enabled properly, or 2) you go direct-streaming-only / no transcoding - which means checking the codec support for all client devices and either only downloading exactly the supported codecs or pre-transcoding everything.

    What I actually do is run Plex/JF on a separate nuc and point it at the nas using a network mount. Just don’t use a network mount for the Plex app database (maybe the same applies to JF too) - only mount the media files themselves. Running Plex with its database accessed over a network mount is a big no-no for various reasons.


  • I’m glad to clear it up! It’s a super powerful tool, and I still occasionally skip the automation and just use it for manual searches, since it reduces that process to a single click to search all configured torrent sites and a single click to download, with the rest handled automatically.

    Before, when I was visiting friends and wanted to quickly add something to plex, I used to need remote access to my torrent client and separate remote access to my NAS filesystem to move/rename files when downloads finished, which was a really manual process. Now all I need is the reverse-proxied sonarr/radarr UI since it handles moving/copying/renaming on download completion - and while the UI isn’t mobile-first, it’s very usable and feels less error-prone than moving/renaming files remotely using a file explorer app.


  • I mean yeah there’s a lot of stuff it does, but you can pick and choose what you want to use it for, so it depends on what you would find useful - you don’t have to use the full automation. I started just by using it as a read-only way to see what movies I had and in what qualities and keep things organized. You can use it as a manual interface to do one-off downloads - basically just as an interface to search 5 torrent sites in 1 place where you are still picking exactly what you want it to download. You can use it only to rename files to a consistent format. So there are a lot of ways to use the various features of sonarr/radarr besides automatic downloads. You’re not forced to go all-in, and out of the box it doesn’t start automatically downloading until you enable that.

    I think it’s a common misconception that if you use sonarr/radarr you have to use download automation and set up trackers but it’s not the case. It’s a useful library organization tool even if you don’t ever have it download anything.


  • Man that sucks. I must have gotten lucky or something with my setup. I also have trackers go unavailable all the time, but I enabled 8 different ones and usually multiple will have the same torrent, so it usually has no problem finding something even if 1 or 2 are down. I also don’t VPN tracker searches, just my BitTorrent client, so flaresolverr seems to work fine for me (I only have it enabled for 2 of my trackers since most of the ones I use don’t seem to require it).

    If you end up trying it out again I would look into the quality settings and make sure you’re not using the remux quality profile (edit: apparently the default 1080p quality profile has the 1080p remux quality enabled, so this might have been the problem). By default most of the quality profiles seem to cap sizes at 100 MB/min, so a 2 hr movie shouldn’t allow anything over about 12 GB. Whenever I tweak quality or custom formats I refer to the TRaSH Guides, which have a lot of battle-tested rules you can copy. I have my main quality profile set to only download qualities between hdtv720 and br1080 (which is just below remux) with custom formats copied from the TRaSH Guides set to prefer hevc with surround sound since I have 5.1.
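
    To put numbers on that cap (using the 100 MB/min figure from above as an example, not necessarily what your profile is set to):

    ```typescript
    // Max allowed release size for a given runtime under a per-minute size cap.
    const capMBPerMin = 100;  // example value from the quality definition
    const runtimeMin = 120;   // a 2-hour movie
    const maxSizeGB = (capMBPerMin * runtimeMin) / 1000;
    console.log(`${maxSizeGB} GB`); // "12 GB" -> anything bigger gets rejected
    ```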



  • I use a Synology nas, which has official support for docker / docker-compose, to run my arrstack, and it has n+2 btrfs redundancy. Then for running Plex and jellyfin I use an Intel nuc10i7 with Quick Sync, with the nas media folder mounted over the network - but over a direct gigabit link between the two so that the traffic stays off my switch.

    I could have gotten away with doing it all on the nas if I’d forgone ECC in favor of Quick Sync, but my first priority with my nas is keeping personal artifacts safe, so I went with ECC.



  • This is super neat even though it’s basically just running the main board plugged into peripherals - I already knew the main board was pretty small but it’s still surprising seeing it sitting on a table.

    Makes me wonder what you could do with a case the size of a wii (or even a mini wii). Plenty of space for cooling, a 2.5" SSD bay, and I’m sure you could fit lots of other goodies in it too.



  • Edit: of course the below only applies to chrome and possibly chrome derivatives - FF is keeping MV2

    It’ll make it a lot more likely that YouTube ads will get through, because MV3 limits the block list size to a fraction of what uBO normally uses and also disallows external/live updates to the block list, instead forcing the rules to be baked into the extension - meaning an update to the blocking rules could take a week of extension review time to go through. I’ve heard that the YouTube ad blocking rules can update multiple times a day, so this would easily allow Google to update their ad code faster than ad blocker updates get approved, letting them always stay ahead.

    So it might not outright break it, but some rules will have to be left out, so it seems like it’ll be a dice roll whether you get an ad because the blocking rule had to be dropped to fit Google’s block list limit, or because the rule you have is stale since it took a couple of weeks for the extension update to be approved on the extension store.

    The feature of MV3 that enables these changes is that in MV3 the extension hands the complete blocklist over to chrome, which does the blocking itself and gets to put limits on the blocklist. In MV2, the extension is given a direct hook to do the blocking on its own, so it can have an unlimited block list size and can source the blocklist from anywhere. Think of it kind of like the difference between letting a graduation speaker speak off the cuff vs the school reviewing the speech beforehand and keeping a finger on the mic switch in case you wander off script.

    So the new system technically can be more secure and performant, because the blocklist is reviewed as part of the extension and because poorly written blocker code can’t slow you down (only Google’s optimized matching logic runs) - but that only works if they don’t impose limits lower than what effective ad blockers need (i.e. updating frequently, like daily, and allowing a large blocklist). Plus uBO is written really well for resource usage, so it’s getting crippled even though it’s a shining example of an effective ad blocker.
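
    To make that hook-vs-handover difference concrete, here’s a rough sketch of the two extension APIs side by side - not uBO’s actual code, just an illustration assuming the Chrome extension typings (@types/chrome):

    ```typescript
    // MV2 world: the extension registers its own blocking hook, so the list it
    // consults can be any size and can be refreshed from anywhere at any time.
    const isBlocked = (url: string) => /ads\.example\./.test(url); // placeholder logic

    chrome.webRequest.onBeforeRequest.addListener(
      (details) => ({ cancel: isBlocked(details.url) }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

    // MV3 world: the extension only declares rules; Chrome itself does the
    // matching and enforces hard limits on how many rules can be shipped or
    // updated (static rulesets baked into the extension go through store review).
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [],
      addRules: [{
        id: 1,
        priority: 1,
        action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
        condition: { urlFilter: "||ads.example.com^" },
      }],
    });
    ```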

    Plus there are even more limitations, like certain types of advanced rules that, as far as I understand, are needed for certain sites that are tricky but aren’t supported in MV3. The uBO GitHub wiki has some information about this: https://github.com/uBlockOrigin/uBOL-home/wiki/Frequently-asked-questions-(FAQ)#filtering-capabilities-which-cant-be-ported-to-mv3