Just got all the hardware set up and working today, super stoked!

In the pic:

  • Raspberry Pi 5
  • Radxa Penta SATA hat for Pi
  • 5x WD Blue 8TB HDD
  • Noctua 140mm fan
  • 12V -> 5V buck converter
  • 12V (red), 5V (white), and GND (black) distribution blocks

I went with the Raspberry Pi to save some money and keep my power consumption low. I’m planning to use the NAS for streaming TV shows and movies (probably with Jellyfin), replacing my Google Photos account (probably with Immich), and maybe streaming music (not sure what I’d use for that yet). The Pi is running Raspberry Pi OS (Desktop); I might switch to the server version. I’ve got all 5 drives set up and I’ve tested streaming some stuff locally, including some 4K movies. So far so good!

For those wondering, I added the 5V buck converter because some people online said the SATA hat doesn’t do a great job of supplying power to the Pi if you’re only feeding 12V to the barrel jack, so I’m going to run a USB-C cable to the Pi. I’m also using it to send 5V to the PWM pin on the fan. Might add some LEDs too, fuck it.

Next steps:

  • Set up RAID 5 (or ZFS RAIDz1?)
  • 3D print an enclosure with panel mount connectors

Any tips/suggestions are welcome! Will post again once I get the enclosure set up.

  • Avid Amoeba@lemmy.ca · 2 days ago
    • That power situation looks suspicious. You better know what you’re doing so you don’t run into undercurrent events under load.
    • Use ZFS RAIDz1 instead of RAID 5.
    • ramenshaman@lemmy.world (OP) · 2 days ago

      Ultimately I would love to use ZFS but I read that it’s difficult to expand/upgrade. Not familiar with ZFS RAIDz1 though, I’ll look into it. Thanks!

      I build robots for a living, the power is fine, at least for a rough draft. I’ll clean everything up once the enclosure is set up. The 12V supply is 10A which is just about the limit of what a barrel jack can handle and the 5V buck is also 10A, which is about double what the Pi 5 power supply can provide.
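
      One quick sanity check (assuming Raspberry Pi OS, where the `vcgencmd` tool is available) is to decode the Pi’s throttle bitmask while the drives are under load:

      ```shell
      # On the Pi, `vcgencmd get_throttled` prints something like: throttled=0x50005
      # Bit 0 = under-voltage right now; bit 16 = under-voltage has occurred since boot.
      throttled=0x50005                    # sample value; substitute the real output
      val=$(( throttled ))
      (( val & 0x1 ))     && echo "under-voltage detected now"
      (( val & 0x10000 )) && echo "under-voltage occurred since boot"
      ```

      If either line prints during a scrub or a big copy, the 5V side needs attention.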

      • CmdrShepard49@sh.itjust.works · 2 days ago

        Z1 is just single parity.

        AFAIK expanding a ZFS pool is a new feature. It’s used in Proxmox, but their version hasn’t been updated yet, so I don’t have the ability to try it out. It should be available to you otherwise.

        Sweet build! I have all these parts laying around so this would be a fun project. Please share your enclosure design if you’d like!

        • Avid Amoeba@lemmy.ca · 2 days ago

          Basically the equivalent of RAID 5 in terms of redundancy.

          You don’t even need to do RAIDz expansion, although that feature could save some space. You can just add another redundant set of disks to the existing one. E.g. have a 5-disk RAIDz1 which gives you the space of 4 disks. Then maybe slap on a 2-disk mirror which gives you the space of 1 additional disk. Or another RAIDz1 with however many disks you like. Or a RAIDz2, etc. As long as the newly added space has adequate redundancy of its own, it can be seamlessly added to the existing one, “magically” increasing the available storage space. No fuss.
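
          A minimal sketch of that approach (the pool and disk names here are hypothetical placeholders):

          ```shell
          # Existing pool "tank" already contains a 5-disk raidz1 vdev.
          # Add a 2-disk mirror vdev; its space joins the pool automatically.
          sudo zpool add tank mirror \
            /dev/disk/by-id/ata-DISK_F /dev/disk/by-id/ata-DISK_G

          # The pool should now list both vdevs and the larger total size.
          zpool status tank
          zpool list tank
          ```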

          • heatermcteets@lemmy.world · 2 days ago

            Doesn’t losing a vdev cause the entire pool to be lost? I guess, to your point, with sufficient redundancy in the new vdev, 1-drive redundancy is essentially the same risk whether it’s 3 disks or 5. But if a vdev is added without redundancy, that would increase the risk of losing the entire pool.

            • Avid Amoeba@lemmy.ca · 2 days ago

              Yes, it prevents bit rot. It’s why I switched to it from the standard mdraid/LVM/Ext4 setup I used before.

              The instructions seem correct but there’s some room for improvement.

              Instead of using logical device names like this:

              sudo zpool create zfspool raidz1 sda sdb sdc sdd sde -f
              

              You want to use hardware IDs like this:

              sudo zpool create zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
              

              You can discover the mapping of your disks to their logical names like this:

              ls -la /dev/disk/by-id/*
              

              Then you also want to add these options to the command:

              sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool ...
              

              These do useful things like setting the optimal block size, enabling compression (basically free performance), and applying a bunch of settings that make ZFS behave like a typical Linux filesystem (its defaults come from Solaris).

              Your final create command should look like:

              sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
              

              You can experiment until you get your final creation command right, since creation/destruction is very fast. Don’t hesitate to create and destroy the pool multiple times until you’ve got it the way you want.
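
              For example (destructive, so only while the pool is still empty; the option list is the one from above):

              ```shell
              # Tear down the test pool and recreate it until the layout looks right.
              sudo zpool destroy zfspool       # irreversible: wipes the pool

              # ...recreate with the full option list, then verify it took effect:
              zpool status zfspool
              zpool get ashift,autotrim zfspool
              zfs get compression,xattr,acltype,relatime zfspool
              ```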

      • eneff@discuss.tchncs.de · 2 days ago

        RAIDz expansion is now better than ever!

        At the beginning of this year (with OpenZFS 2.3.0) they added zero-downtime expansion, along with some other things like enhanced deduplication.
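
        If I’m reading the release notes right, expansion attaches one disk at a time to an existing raidz vdev; a sketch with placeholder names:

        ```shell
        # OpenZFS >= 2.3: attach a new disk to the existing raidz1 vdev.
        # "raidz1-0" is the vdev name as printed by `zpool status`;
        # "mypool" and the disk path are placeholders.
        sudo zpool attach mypool raidz1-0 /dev/disk/by-id/ata-NEW_DISK

        # Expansion runs in the background; progress shows up here.
        zpool status mypool
        ```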

      • fmstrat@lemmy.nowsci.com · 2 days ago

        ZFS is so… So much better. In every single way. Change now before it’s too late, learn and use the features as you go.

        • Avid Amoeba@lemmy.ca · 2 days ago

          This. Also it’s not difficult to expand at all. There are multiple ways. Just ask here. You could also ask for hypothetical scenarios now if you like.

      • Creat@discuss.tchncs.de · 2 days ago

        ZFS, specifically RaidZx, can be expanded like and raid 5/6 these days, assuming support from the distro (works with TrueNAS for example). The patches for this have been merged years ago now. Expanding any other array (like a striped mirror) is even simpler and is done by adding VDevs.