I tried to find a more applicable community to post this to but didn’t find anything.

I recently set up a NAS/server on a Raspberry Pi 5 running Raspberry Pi OS (see my last post). Since then I’ve installed everything into a 3D-printed enclosure and set up RAID (ZFS RAIDZ1). Before setting up RAID I could transfer files to/from the NAS at around 200 MB/s, but now that RAID is seemingly working, transfers are running at around 28-30 MB/s. I did a couple of searches and found someone suggesting disabling sync ($ sudo zfs set sync=disabled zfspool). I tried that and it doesn’t seem to have had any effect. Any suggestions are welcome, but keep in mind that I barely know what I’m doing.

Edit: When I look at the SATA hat, the LEDs indicate that the drives are being written to for less than half a second and then there’s a break of about 4 seconds where there’s no writing going on.
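
In case it helps with diagnosis: a quick way to confirm whether that sync change actually applied, and to put it back to the default since it made no difference (assuming the pool is named zfspool, as in the command above):

$ zfs get sync zfspool
$ sudo zfs set sync=standard zfspool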

  • 3dcadminA · 1 day ago

    A Pi with a HAT and 5 SATA drives isn’t ever going to perform very well with ZFS in RAIDZ1: it quickly becomes CPU limited, IO limited, and possibly even RAM limited. I love ZFS, but it’s a bit of a performance hog, though pretty bulletproof if set up correctly! The HAT will be going through a PCIe x1 interface, so at the very least set the PCIe link to Gen 3 speeds. Jeff tells you about it here:

    https://www.jeffgeerling.com/blog/2023/forcing-pci-express-gen-30-speeds-on-pi-5
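
    If it helps, the relevant change from that article is a single line in /boot/firmware/config.txt followed by a reboot (this is a sketch from memory of the linked post, so double-check it against the article):

    # force the Pi 5's PCIe x1 link to Gen 3 speeds
    dtparam=pciex1_gen=3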

    • ramenshaman@lemmy.world (OP) · 1 day ago

      People keep saying the Pi is CPU limited, but it hasn’t gotten over 10% across all 4 cores while I’ve been transferring files. 16 GB of RAM doesn’t seem very limited to me either. Thanks for the link!
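
      For what it’s worth, here’s a way to watch what the pool is actually doing while a transfer runs (assuming the pool is named zfspool); it prints per-disk read/write throughput every second:

      $ zpool iostat -v zfspool 1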

      • 3dcadminA · 24 hours ago

        It isn’t that limited, but ZFS on ARM seems to perform much worse, and you often don’t get a full picture of the true system load. The biggest limitation is the IO, which is very bad for 5 drives in ZFS RAIDZ1: the data is distributed across all 5 drives along with parity. The Pi can only manage around 500 MB/s to an NVMe drive whilst many other platforms will see 3000 MB/s, and that is why it suffers so much here, because that ~500 MB/s is shared across 5 drives. The tops you’d get is about 10 MB/s per transfer, I reckon, and that is roughly what you are seeing. You’d be better off with 3 larger drives in RAIDZ1.
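
        One rough way to tell whether the pool or the network path is the bottleneck is a local write test straight onto the pool (a sketch assuming it’s mounted at /zfspool; /dev/urandom is used so ZFS compression doesn’t flatter the numbers, and the test file should be deleted afterwards):

        $ dd if=/dev/urandom of=/zfspool/testfile bs=1M count=1024 conv=fdatasync status=progress
        $ rm /zfspool/testfile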