G’day,
tl;dr - have two Unraid servers in geographically independent locations, want to use them as redundant duplicate storage for some shares.
- Unraid server at home, 120TB of storage but only need to sync ~10TB total which is spread over 3 shares.
- Bare metal Ubuntu server at work, 12TB of storage, but only need to sync ~6TB total over the equivalent of a single share.
- Have a second Unraid server with 26TB of storage I plan on taking to work. I want to backup my ~10TB from home to work, and my ~6TB from work to home.
Currently have Crashplan running on both ends, which keeps up fine with the work data size, but it will take literally years to upload the home volume because it is so dang slow (~3Mbps, constantly stopping to rescan millions of files), so I want something else in place ASAP. Will leave Crashplan running too. It’ll catch up eventually.
Home has 400Mbps upload, work has 100Mbps upload, so speed shouldn’t be the issue.
Is Syncthing the answer? Was thinking of doing a read-only share on the sending end.
I use Syncthing and it works great for my use case, but I don’t think it’s the recommended option for backups: it’s a sync tool, so deletions and corruption propagate to the other side just like any other change.
If you are running ZFS on both servers you could look at Sanoid; its companion tool Syncoid handles the actual replication.
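A minimal sketch of what that replication could look like, assuming hypothetical dataset names and SSH access from home to the work box (Sanoid itself takes its snapshot policy from /etc/sanoid/sanoid.conf):

```
# Replicate a dataset and its snapshots to the remote pool over SSH.
# "tank/shares" and "backup/shares" are placeholder dataset names.
syncoid tank/shares root@work-server:backup/shares
```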
I’d give Syncthing a try. Though you should set up some kind of tunnel so the two servers can connect directly without relays; relay speed depends entirely on how much traffic the relay you land on is carrying.
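Alternatively, if you can forward a port, you can pin each side to the peer’s fixed address and disable relaying in config.xml. A sketch with a placeholder device ID and address:

```
<!-- replace the default "dynamic" address with the peer's real one -->
<device id="DEVICE-ID-OF-PEER" name="work-server">
    <address>tcp://203.0.113.10:22000</address>
</device>

<!-- and turn off relaying entirely -->
<options>
    <relaysEnabled>false</relaysEnabled>
</options>
```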
You could have a look at rclone.
https://forums.unraid.net/topic/51633-plugin-rclone/
Sounds like you want the sync command.
rclone sync - Make source and dest identical, modifying destination only.
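A minimal sketch, assuming you create an SFTP remote named "work" pointing at the work server (hostname and paths are placeholders):

```
# One-time: define an SFTP remote for the other server
rclone config create work sftp host work.example.com user backup

# Preview what would change, then do the real one-way sync
rclone sync /mnt/user/media work:/mnt/user/backup/media --dry-run -v
rclone sync /mnt/user/media work:/mnt/user/backup/media --transfers 8 --progress
```

Be careful with sync, though: since it makes the destination identical to the source, anything deleted at the source gets deleted at the destination too.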
This was going to be exactly my suggestion
Just to point out its utility, I use rclone at home to sync my data elsewhere, and I use it extensively at work for internal systems and client systems.
Sync for replicating one site to another, bisync for bidirectional. Extremely flexible.
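For the bidirectional case, a sketch using the same hypothetical "work" remote (note that rclone’s docs still flag bisync as beta):

```
# The first run must establish the baseline with --resync
rclone bisync /mnt/user/documents work:/mnt/user/documents --resync

# Subsequent runs (e.g. from cron) propagate changes both ways
rclone bisync /mnt/user/documents work:/mnt/user/documents -v
```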
Bare metal Ubuntu
Just a nitpick, but bare metal means no OS.
Wait, what? Pretty sure it means installed directly on the hardware, as opposed to virtualized.
So I did some digging, and it’s less clear than I thought.
I come from the embedded development world, where “bare metal” has been in use for a long time. Pre-2000s, code running directly on the hardware with no OS underneath is exclusively what the term meant. Sometime around the mid-2000s, the virtualization crowd started co-opting it.
So I guess it does indeed mean both, but as a stickler for tradition, that doesn’t sit right with me. The term just makes more sense when you’re applying it to the hardware itself: bare. No middleman, and that includes an OS.