G’day,

tl;dr - I have two Unraid servers in geographically independent locations and want to use them as duplicate / redundant storage for some shares.

  • Unraid server at home, 120TB of storage but only need to sync ~10TB total which is spread over 3 shares.
  • Bare metal Ubuntu server at work, 12TB of storage, but only need to sync ~6TB total over the equivalent of a single share.
  • Have a second Unraid server with 26TB of storage I plan on taking to work. I want to back up my ~10TB from home to work, and my ~6TB from work to home.

Currently have Crashplan running on both ends, which keeps up fine with the work data size, but it will take literally years to upload the home volume because it is so dang slow (~3Mbps, constantly stopping to rescan millions of files), so I want something else in place ASAP. I'll leave Crashplan running too; it'll catch up eventually.

Home has 400Mbps upload, work has 100Mbps upload so speed shouldn’t be the issue.

Is Syncthing the answer? Was thinking of doing a read-only share on the sending end.
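
For what it's worth, Syncthing does support exactly that via the folder type setting (normally configured through the web GUI rather than by hand). A minimal sketch of the relevant fragment of the sender's config.xml, with a placeholder folder ID and path:

```xml
<!-- Sketch only: folder ID, label, and path are placeholders.
     type="sendonly" makes this side refuse incoming changes. -->
<folder id="home-media" label="Home Media" path="/mnt/user/media" type="sendonly">
</folder>
```

The receiving end can likewise be set to "receiveonly" so accidental edits at work never propagate back home.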

  • Rikudou_Sage@lemmings.world · 18 hours ago

    I’d give Syncthing a try. Though you should make some kind of tunnel so that they can communicate without relays, the speed there really depends on what traffic the relay is going through.
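
    One common way to get that tunnel is WireGuard. A minimal sketch of the work-side config, assuming home is reachable on UDP 51820; all keys, hostnames, and addresses below are placeholders:

    ```ini
    # /etc/wireguard/wg0.conf on the work server (sketch, placeholder values)
    [Interface]
    PrivateKey = <work-private-key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <home-public-key>
    Endpoint = home.example.com:51820
    AllowedIPs = 10.0.0.1/32
    # keep NAT mappings alive so either side can initiate
    PersistentKeepalive = 25
    ```

    With the tunnel up, you can pin the remote device's address in Syncthing to something like tcp://10.0.0.1:22000 so it connects directly over the tunnel instead of falling back to relays.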

    • curbstickle@anarchist.nexus · 23 hours ago

      This was going to be exactly my suggestion

      Just to point out its utility, I use rclone at home to sync my data elsewhere, and I use it extensively at work for internal systems and client systems.

      Sync for replicating one site to another, bisync for bidirectional. Extremely flexible.
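
      To illustrate, a hedged sketch of what those two modes look like on the command line; the remote name "work" and the paths are placeholders you'd set up via `rclone config`, and bisync requires --resync on its very first run to establish a baseline:

      ```shell
      # One-way replication, home -> work; --dry-run shows what would
      # transfer, drop it once you're happy with the plan.
      rclone sync /mnt/user/media work:backup/media --transfers 8 --dry-run

      # Bidirectional sync; the first invocation must include --resync.
      rclone bisync /mnt/user/docs work:backup/docs --resync
      ```

      Note that `rclone sync` makes the destination match the source, deleting extra files on the destination, so it behaves as replication rather than an additive backup.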

  • iegod@lemmy.zip · 16 hours ago

    Bare metal Ubuntu

    Just a nitpick but bare metal means no OS.

      • iegod@lemmy.zip · 14 hours ago

        So I did some digging, and it’s less clear than I thought.

        I come from the embedded development world, where "bare metal" has been in use for a long time. Pre-2000s, that was exclusively what the term meant. Sometime around the mid 2000s, virtualization vendors started co-opting it.

        So I guess it does indeed mean both, but being a stickler for tradition, it doesn't sit right with me. The term just makes more sense when you apply it to the hardware itself: bare, with no middleman, and that includes an OS.