

This won’t work. Your WAN IP isn’t just dynamic; it’s on the ISP’s NAT network, and the IP that public services see is shared across many customers. That’s CG-NAT.
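A quick way to confirm CG-NAT (a rough sketch, assuming a Linux or macOS box behind your router): compare the WAN address your router reports with the address the internet actually sees for you.

```
# Address the internet sees for you (any "what's my IP" service works)
curl -s https://ifconfig.me

# Compare with the WAN address on your router's status page.
# If the router shows something in 100.64.0.0/10 (or RFC1918 space)
# while the public address above is different, you're behind CG-NAT.
```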
I don’t know where you work, but don’t access your tailnet from a work device, and ideally not from their network either.
Speaking to the Roku: you could buy a cheap Raspberry Pi and a USB network adapter. One port goes to the network, the other to the Roku. The Pi can advertise a Tailscale subnet route covering the Roku, and the Roku itself probably needs nothing, since everything is upstream, including the private Tailscale 100.x.y.z addresses, which get captured by the Raspberry Pi sitting in the middle.
I guess that’d cost like 40-ish dollars, one time.
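Roughly what that Pi-in-the-middle looks like on the Tailscale side (a sketch; the 192.168.2.0/24 subnet for the Roku-facing adapter is just an assumed example):

```
# On the Raspberry Pi: allow it to forward traffic between its two interfaces
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the Roku-side subnet into the tailnet
sudo tailscale up --advertise-routes=192.168.2.0/24
```

Approve the route in the Tailscale admin console and the Roku’s traffic rides through the Pi.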
Right up there, battling Broadcom for worst.
Australia just hopes the countries who handle the waste from the uranium we sell don’t, you know, make nuclear weapons with it. They’re good allies, they wouldn’t do anything with that, right? They certainly would never enrich the original…
If Microsoft named it, it’s temporary. Let’s check Intune, no wait, Endpoint Manager, no wait, Intune again. First I’ll just make sure my login works in Azure AD, no wait, Entra, no wait, it’ll be Copilot by the end of the year. Not to be confused with Copilot (Office), Copilot (GitHub), Copilot (Azure), or Power Platform, no wait, Copilot?
Would this not be the Linux Subsystem for Windows? LSW.
I’m far from an expert, sorry, but my experience has been so far so good (literally the wizard-configured setup in Proxmox, set and forget), even through losing a single disk. Performance for VM disks was great.
I can’t see why regular files would be any different.
I have 3 disks, one in each host, with Ceph keeping 2 copies (tolerant of 1 disk loss) distributed across them. That’s basically what I think you’re after.
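For reference, that replica count is just a pool setting (a sketch with a made-up pool name; note 2/1 trades safety for capacity, and Ceph’s default is 3/2):

```
# size = number of copies kept, min_size = copies required to keep serving I/O
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1

# Verify
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size
```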
I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online, you should be able to see it. I do. My emphasis, though, is generally on getting the host back online.
I’m not 100% sure what you’re trying to do, but a mix of Ceph as the remote storage plus something like Syncthing on an endpoint to send stuff to it might work. Syncthing might just work on its own, without Ceph.
I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and for the media server to pull it off. That’s just TrueNAS SCALE, so it handles the data in a similar way. ZFS is also very good, but until SCALE came out it wasn’t really possible to get the “add a compute node to expand your storage pool” model, which is how I want my VM hosts to work. Scaling ZFS out like that looks way harder than Ceph.
Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware: see how it goes on dummy data, then blow it away and try something else. See how it behaves when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you’ve learned works best.
3x Intel NUC 6th gen i5 (2 cores), 32GB RAM. Proxmox cluster with Ceph.
I just ignored the limitation and tried a single 32GB SODIMM once (out of a laptop) and it worked fine, but I went back to 2x 16GB DIMMs since the limit was still the 2-core CPU. Lol.
I’ve been running that cluster for 7 or so years now, since I bought them new.
I happily suggest running off shit-tier hardware, since three nodes gives you redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers plus RD gateway, broker, session hosts, FSLogix, etc., back when MS had only just bought that tech. Meanwhile my home *arr stack just chugs along in Docker containers. Even my OPNsense router is virtual, running on them. Just get a proper managed switch and bring the internet in on a VLAN to the guest VM on a separate virtual NIC.
Point is, it’s still capable today.
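If you do the virtual-router thing, the Proxmox side is just a VLAN-aware bridge. Something like this in /etc/network/interfaces (a sketch; the NIC name, addresses, and VLAN range are all assumed):

```
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

Then give the OPNsense VM a second virtual NIC on vmbr0 with the VLAN tag set to whatever VLAN the switch delivers the internet on.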
PS: if you want to be scared, look up Shodan. Maybe watch some of the Shodan examples on YouTube.
I trust Steve Gibson and I’ve listened to his podcast for years. Without that trust, his tool does no more than what any other internet citizen could already do to you, so even if you don’t trust it, it’s a trial by gentle fire.
Personally I recommend listening to the Security Now podcast. But then again, I’ve worked in IT for 15 years, so it’s kind of my thing.
I thought this was an Onion article.
IIRC, whenever I check online, I seem to find that whatever I configured is dead or no longer the cool choice.
Whatever it is, I barely touch it and it works great. Very happy.
Two Pi-hole servers, one on the VM VLAN, one on the device VLAN, with OPNsense handing both out via DHCP options. I sometimes update the lists, like yearly… at best. They’ve been there over 7 years, so calling them robust is correct. The hypervisors are 3 Proxmox servers in a cluster using Ceph, Intel NUC 3rd gen. Less than 80W combined with all VMs. Also 8 years old, no failures, but tolerant of one.
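In OPNsense that’s just the DNS servers setting on each VLAN’s DHCP scope; if you were doing the equivalent with dnsmasq, it’s one line per scope (the addresses here are made up):

```
# DHCP option 6 (DNS servers): point clients at both Pi-holes
dhcp-option=option:dns-server,192.168.10.53,192.168.20.53
```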
Probably because it uses nothing online, including the voice-to-text. It’s all on-device, which is a rare claim for those kinds of features.
My brother’s overpriced Merc uses lighting zones and detection to dim sections so it doesn’t blind oncoming traffic. Cool, but I’m sure within 5 years these extremely complex lighting arrays will fail and won’t be user-serviceable, short of a full headlight cluster replacement for $4k.
More complexity, shorter life. You’ll get what you want, but only because it suits the makers.
Pop!_OS
IMO.
I spent like 20 minutes self-hosting it and running it over Tailscale so traffic is always private… never had an issue. I’ve got over 20 devices accessible on it.
It’s easy to register machines remotely over SSH: just send the installer, run it with the server name plus key, then set a static password.
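For the Tailscale half of that, remote enrollment over SSH can look like this (a sketch; the remote host and the pre-generated auth key are placeholders):

```
# Push Tailscale onto a remote box and join it to the tailnet
ssh user@remote-box 'curl -fsSL https://tailscale.com/install.sh | sh'
ssh user@remote-box 'sudo tailscale up --authkey tskey-auth-XXXXXXXXXX'
```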
I still think Moonlight is great gaming-wise, though. You won’t really regret that.
If DNS resolved, then it’s not blocked there. You need to look at your network.
Bypass DNS and connect to the IP and port directly. What happens?
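Something like this, assuming a Linux box and made-up host/IP/port values:

```
# Does the name resolve at all?
dig +short service.example.com

# Skip DNS entirely and test the IP and port
nc -vz 203.0.113.10 443

# Or pin the hostname to that IP for a full HTTPS request
curl -v --resolve service.example.com:443:203.0.113.10 https://service.example.com/
```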