Client availability is a valid point. I use an Android TV, and that’s been easy for me. There are mobile clients for every phone and tablet.
I’ve never used Plex. What are some of the features that you’re missing in Jellyfin? Genuinely curious.
Emma Finestone is the curator of this 2.6-million-year-old stone tool collection :)
It’s really popular in the server world, and it’s the foundation of many other distros, maybe that’s why?
Some more info, like error messages or logs, would help people help you.
Also, stick with one distro while troubleshooting, and start by giving us the distro, kernel, NVIDIA driver, Steam, Wine, and Proton versions/variants, plus any other relevant packages…
When you switched distros, did you do a full clean install of everything, including Steam?
No Bias, No Bull AI

I’ve spent my career grappling with bias. As an executive at Meta overseeing news and fact-checking, I saw how algorithms and AI systems shape what billions of people see and believe. As a journalist at CNN, I even briefly hosted a show called “No Bias, No Bull” (easier said than done, as it turned out).

Trump’s executive order on “woke AI” has reignited the debate around bias and AI. The implication was clear: AI systems aren’t just tools, they’re new media institutions, and the people behind them can shape public opinion as much as any newsroom ever did. But for me, the real concern isn’t whether AI skews left or right, it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.

Political bias misses the deeper issue: transparency. We rarely see which sources shaped an answer, and when links do appear, most people ignore them. An AI answer about the economy, healthcare, or politics sounds authoritative. Even when sources are provided, they’re often just footnotes while the AI presents itself as the expert. Users trust the AI’s synthesis without engaging with the sources, whether the material came from a peer-reviewed study or a Reddit thread.

And the stakes are rising. News-focused interactions with ChatGPT surged 212% between January 2024 and May 2025, while 69% of news searches now end without a click through to the original source. We’ve seen this before: media claiming neutrality while harboring clear bias. We’re making the same mistake with AI, accepting its conclusions without understanding their origins or how sources shaped the final answer.

The solution isn’t eliminating bias (impossible), but making it visible. Restoring trust requires acknowledging that everyone has a perspective, and pretending otherwise destroys credibility. AI offers a chance to rebuild trust through transparency, not by claiming neutrality, but by showing its work.

What if AI didn’t just provide sources as afterthoughts, but made them central to every response, both what they say and how they differ: “A 2024 MIT study funded by the National Science Foundation…” or “How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers…”. Even this basic sourcing adds essential context. Some models have made progress on attribution, but we need audit trails that show us where the words came from and how they shaped the answer.

When anyone can sound authoritative, radical transparency isn’t just ethical, it’s the principle that should guide how we build these tools. What would make you click on AI sources instead of just trusting the summary?

Full transparency: I’m developing a project focused precisely on this challenge: building transparency and attribution into AI-generated content. Would love your thoughts.
- Campbell Brown.
That looks really unmaintained; the last update was years ago. I wouldn’t run random shell scripts from the Internet without understanding what they do first.
Here’s a pretty decent Wiki to get you started with the *arr ecosystem: https://wiki.servarr.com/
My “servers” are headless, in the basement, so even if I’m home, it’s still remote :D
It’s always good to read the docs, but I often skip them myself :)
They have this nifty tool called pve8to9 that you can run before upgrading, to check if everything is healthy.
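In case it helps, running it is as simple as this (on a stock PVE 8 node):

```
# run on each node before the 8-to-9 upgrade; it prints a pass/warn/fail summary per check
pve8to9
```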
I have a 3-node cluster, so I usually migrate my VMs to a different node and do my maintenance then, with minimal risk.
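Roughly what the migration step looks like (the VM id and node name here are made up):

```
# live-migrate VM 101 off the node I'm about to take down for maintenance
qm migrate 101 pve2 --online
```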
This was my starter machine. Of course, an NVMe makes sense, especially running Windows on it. I went for Proxmox, and now I have 4 different machines: a cluster of 3 similar SFFs, and a chunkier boi with an i7, 64GB RAM and a Quadro GPU. That one was the most expensive, around 250€.
Beware, this is how it starts. From a single machine in my office, I went to a mini datacenter in my cellar, with 4 “servers” (micro-PCs), two NAS devices, a Raspberry Pi cluster, a Dell Wyse cluster, new switches and access points, and so much more :))
You can get away with half that. I run my setup (similar to what you wrote) on a Dell micro SFF with an i5-6500T and 16GB RAM that I paid 90€ for. Not the snappiest, but it works just fine.
I don’t use any GUI… I use Terraform in the terminal or via CI/CD. There is an API and also a Terraform provider for Proxmox, and I can use that, together with Ansible and shell scripts, to manage VMs, but I was looking for k8s support.
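For context, the API route I mean looks roughly like this (hostname, node name, and token are placeholders):

```
# list the VMs on a node via the Proxmox REST API, authenticated with an API token
curl -k -H "Authorization: PVEAPIToken=root@pam!automation=<token-secret>" \
  https://pve1.example.lan:8006/api2/json/nodes/pve1/qemu
```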
Again, it works fine for small environments, with a bit of manual work and human intervention, but for larger ones, I need a bit more. I moved away from a few VMs acting as k8s nodes, to k8s as a service (at work).
I do the same in Proxmox VMs, in my homelab, which is… fine. I was talking more about native support, manageable via an API or something.
Say I need to increase the number of nodes in my cluster. I spin up a new VM from the template I have, adjust the network configuration, update the packages, and add it to the cluster. Oh, and maybe I should also update all the other nodes while I’m there, because now the new machine runs a different Docker version. I have some Ansible and bash scripts that automate most of this (rough sketch below). It works for my homelab.
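Roughly what that looks like on my end, assuming a cloud-init template (all ids, names, and the playbook are hypothetical):

```
# clone a new worker VM from the template and give it an address via cloud-init
qm clone 9000 205 --name k8s-worker-05 --full
qm set 205 --ipconfig0 ip=10.0.0.205/24,gw=10.0.0.1
qm start 205

# then patch packages and join it to the cluster with the usual playbook
ansible-playbook -i inventory.ini add-worker.yml --limit k8s-worker-05
```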
At work, however, I have a handful of clusters with dozens of nodes. The method above becomes tedious fast and is prone to human error. We use external Kubernetes-as-a-service platforms (think DOKS, EKS, etc.), which have Terraform providers available. So I open my Terraform config and increase the number of nodes in one of my pre-production clusters from 9 to 11. I also change the version from 1.32 to 1.33. I then push my changes to a new merge request, my GitLab CI spins up and calls Atlantis to run a terraform plan, I check the results and ask it to apply. It takes 2 minutes. I would love to see this work with Proxmox.
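For illustration, the whole change is a couple of lines and commands (file, variable, and branch names are made up; the Atlantis flow is as described above):

```
# bump the preprod pool from 9 to 11 nodes and the version to 1.33
git checkout -b preprod-scale
$EDITOR preprod.tf   # node_count: 9 -> 11, kubernetes version: "1.32" -> "1.33"
git commit -am "preprod: scale to 11 nodes, k8s 1.33"
git push -u origin preprod-scale
# CI triggers Atlantis, which posts the terraform plan output on the MR;
# after review, a comment of "atlantis apply" rolls it out
```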
Man, I’ve been living and working in Germany for close to 10 years now. Proxmox is like that 50yo colleague of mine. Hard worker, reliable, really knowledgeable, a treasure trove of info, but he can’t be budged. He insists on installing any new VM using the GUI (both Windows and Linux), he avoids learning “new things” like Docker or Kubernetes, and really distrusts “the cloud”.
I will keep using Proxmox, as I have for many years both at work and at home, but we are migrating from a VM (with Docker) setup to Kubernetes. It would have been great for Proxmox to offer some support there, but…
I see what you mean, interesting. I hadn’t really looked at NixOS as a server OS. I personally prefer using multiple compose files (in the process of migrating to k8s). I share resources too; like in your example, I just point to the existing DB instance instead of creating a new one for each new service.
May I ask what you mean by NixOS support? There’s a docker compose file you could use in their repo…
I believe R-- stands for Readarr and G--R-- stands for Goodreads.
This is what I do. Shared folder via NFS, mounted inside the VM (fstab), added to the volumes of the docker container in the compose file…
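In case it helps anyone, a minimal sketch (the NAS address and paths are just examples):

```
# /etc/fstab inside the VM: mount the NAS export at boot
# 192.168.1.10:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0
sudo mount -a

# then the compose file just bind-mounts the NFS path into the container:
#   volumes:
#     - /mnt/media:/media
```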