

Yeah I posted my comment just before everything broke all over again lol. Got it working again but yeah always have to spend so much time fixing this shit thanks to Google.
It works, but requires a shit ton of time to constantly stay on top of fixing Play Integrity.
Best we can do is silently block your messages if you fail Play Integrity.
I dunno, but I would think BSD shares a lot of the same ethos.
He posts a lot of his art on Lemmy.
If I have rate limiting set up (through crowdsec) to prevent bots from scanning / crawling my server, should I be as worried?
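For context, a typical CrowdSec setup along those lines looks roughly like this (a sketch, not a full config; the collection and bouncer names are examples, and you'd pick the collection matching your reverse proxy):

```sh
# Install a scenario collection that detects scanning/crawling behavior
sudo cscli collections install crowdsecurity/nginx

# Register a bouncer - the component that actually blocks offending IPs
sudo cscli bouncers add my-firewall-bouncer

# Inspect what's currently being blocked
sudo cscli decisions list
```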
I think I remember being able to switch to anonymous user with it.
DV is difficult to get working properly on PC, and last time I tried to set up an HTPC I ran into tons of remote control issues and it wasn’t simple enough that I could just hand the remote over to a guest (or my spouse).
2019 Shield has plenty of issues sure, but it still seems like the best option for me, personally.
Agree about disabling the network on the TV itself.
SmartTube
You can connect Jellyfin to an SSO provider. It still needs work, and client support is lacking. Ideally I think it maybe should be built in rather than a plug-in (would definitely encourage more client support). But it exists.
https://github.com/9p4/jellyfin-plugin-sso
Feature request for oidc/sso:
https://features.jellyfin.org/posts/230/support-for-oidc-oauth-sso
As it stands, you could enable both the SSO and LDAP plugins, and let users do password resets entirely through your auth provider.
Basically, this is all stuff that comes with Plex out-of-the-box, but you sort of have to glue it together yourself with Jellyfin, and it’s not yet in an ideal state. Plex is much, much easier to configure. I wouldn’t let yourself believe that Plex doing all this for you makes you totally secure, though – there have been multiple incidents with their auth, and IIRC the LastPass attacker pivoted from a weak Plex install. Just food for thought.
I see a lot of comments in here against the cloud and saying that on-prem is better. My question is, why would on-prem uptime be any better? Or is it more about a loss of control in moving to the cloud?
I see. The word lie is strong, and it’s entirely within the realm of possibility that you never had any issues arise with your install. I see your point, and apologize for perhaps a bit of grandstanding on my behalf. I was more focused on the pros/cons of different types of distros, and missed the reason why you were acting defensively.
I feel this kind of conversation still isn’t super helpful though (for either of you). I mean it clearly can be true that one person (or one chunk of the community) has no issues, while another person (and maybe another good chunk of the community) does have issues. Though perhaps in getting involved, I haven’t really helped either.
I’ve had my own issues with two different laptops over the years, and in that time I’ve seen multiple packaging/dependency issues hit a majority of Arch users. My own issues are often caused by bugs on the bleeding edge that users on a non-rolling distro dodge altogether. For me these have mostly been easy to resolve, but it’s a much different experience compared with “stable” distros, where similar changes that require manual intervention (ideally) happen at a predictable cadence, and are well-documented in release notes.
I still strongly prefer Arch, as I’ve hit showstoppers and annoyances with “stable” distros as well. I guess I’m saying I don’t really understand your responses, and why you seem so critical of user anecdotes in this space, when your original comment was a (perfectly fine) anecdote about how everything’s working for you. That’s great! But we can also point to many examples caused directly by bugs or dependency issues that only crop up in a rolling release. Taking all these data together, good and bad, pros and cons, working and not working, can help us learn and form a more complete picture of reality.
What you’ve said is true, though it’s a bit of a trade-off – over the years I’ve wasted so many hours with those “user friendly” distros because I need a newer version of a dependency, or I need to install something that isn’t in the repos. Worst case I have to figure out how to compile it myself.
It’s very rare to find something that isn’t in the Arch official repos or the AUR. Personally I’ve found that being on the bleeding edge tends to save me time in the long run, as there’s almost no barriers to getting the packages that I need.
Right, but if you want a digital video library that hasn’t been compressed to hell by some streaming company then your only option is using Blu-ray as a source.
I really hope they do…
I love my Nvidia Shield, but it’s definitely aging, and sometimes getting it to actually play 2160p Blu-ray remuxes without stuttering is a chore. Plus Dolby Vision does not even display properly due to the “red push” issue, and Nvidia has no plans to fix it (they have abandoned the device and the entire market segment).
Currently the only method to get a streaming box to actually display Dolby Vision properly (profile 7 FEL) involves installing Linux (CoreELEC), and I believe the only device with all the proper support (licensing, hardware, etc) is the Ugoos Am6b+.
I much prefer the Jellyfin android client to Kodi, so I’ve been sticking with the Shield for now. I’d love another Linux based competitor, and hopefully a more polished streaming box from Valve could spur some development of better clients and tools.
I am a bit nervous about Valve actually being able to get all the licensing in place to pull this off.
When I think about how many hours of my life I’ve wasted and how much room in my brain is dedicated to all these stupid modern formats…my hope is that a player like Valve entering the market could do some good work. We are in a very sorry state when it comes to compatibility.
Though again… While I don’t have a deep understanding of the issues, it seems like a large chunk of it revolves around licensing, and I don’t know how much of a dent Valve can make in that.
Yes, what you’re saying is the idea, and why I went with this setup.
I am running raidz2 on all my arrays, so I can pull any 2 disks from an array and my data is still there.
Currently I have 3 arrays of 8 disks each, organized into a single pool.
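In ZFS terms, that layout is a single pool built from three raidz2 vdevs, each of which can lose any two disks without data loss. A sketch of the equivalent command (pool name and device names are placeholders; in practice you’d use /dev/disk/by-id paths):

```sh
# Hypothetical: one pool, three 8-disk raidz2 vdevs
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
  raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```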
You can set up something similar with any RAID system, but so far Truenas has been rock solid and intuitive to me. My gripes are mostly around the (long) journey to “just Docker” for services. The parts of the UI / system that deal with storage seem to have a strong focus on reliability / durability.
Latest version of Truenas supports Docker as “apps” where you can input all config through the UI. I prefer editing the config as yaml, so the only “app” I installed is Dockge. It lets me add Docker compose stacks, so I edit the compose files and run everything through Dockge. Useful as most arrs have example Docker compose files.
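For anyone curious, a stack you’d paste into Dockge looks like any ordinary compose file. A minimal example (the image is the well-known linuxserver Sonarr image; paths, IDs, and port mapping here are illustrative, adjust to your setup):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /mnt/apps/sonarr/config:/config   # app config on the SSD mirror
      - /mnt/tank/media:/media            # media library on the pool
    ports:
      - "8989:8989"
    restart: unless-stopped
```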
For hardware I went with just an off-the-shelf desktop motherboard, and a case with 8 hot swap bays. I also have an HBA expansion card connected via PCIe, with two additional 8 bay enclosures on the backplane. You can start with what you need now (just the single case/drive bays), and expand later (raidz expansion makes this easier, since it’s now possible to add disks to an existing array).
If I was going to start over, I might consider a proper rack with a disk tray enclosure.
You do want a good amount of RAM for zfs.
For boot, I recommend a mirror of at least two of the cheapest SSDs you can find, each in an enclosure connected via USB. Boot doesn’t need to be that fast. Do not use thumb drives unless you’re fine with replacing them every few months.
For docker services, I recommend a mirror of two reasonably sized SSDs. Jellyfin/Plex in particular benefit from an SSD for loading metadata. And back up the entire services partition (dataset) to your pool regularly. If you don’t splurge for a mirror, at least do the backups. (Can you tell who previously had the single SSD running all of his services fail on him?)
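The backup itself is just a snapshot plus send/receive. A sketch, with hypothetical dataset names (ssd-mirror/apps is the services dataset, tank/backups/apps the target on the main pool):

```sh
# Snapshot the services dataset, then replicate it to the pool
zfs snapshot ssd-mirror/apps@nightly
zfs send ssd-mirror/apps@nightly | zfs recv tank/backups/apps
```

Subsequent runs can use an incremental send (`zfs send -i`) so only the changes since the last snapshot get copied.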
For torrents I am considering a cache SSD that will simply exist for incoming, incomplete torrents. They will get moved to the pool upon completion. This reduces fragmentation in the pool, since ZFS cannot defragment. Currently I’m using the services mirror SSDs for that purpose. This is really a long-term concern. I’ve run my pool for almost 10 years now, and most of the time wrote incomplete torrents directly to the pool. Performance still seems fine.
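With Transmission, for example, that split is just two settings in settings.json; completed torrents get moved to the download dir automatically (paths here are placeholders):

```json
{
  "download-dir": "/mnt/tank/torrents",
  "incomplete-dir": "/mnt/ssd/incomplete",
  "incomplete-dir-enabled": true
}
```

Most other clients (qBittorrent, Deluge) have an equivalent “incomplete downloads folder” option.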
Thank you for your service.