I’m doing something right, and I’m doing something wrong.

A while ago I got a server and an initial set of hard drives and began the work to have nice local storage. I had a hard drive for the main operating system, one for security camera footage, and then a pair of 14TB drives for the beginning of my main storage pool. I set up mergerfs and snapraid, with one of the pair as the parity drive and the other as a data drive. Everything was working well.

I recently picked up a set of 14TB drives to begin expanding the available storage. I edited the fstab file and the snapraid config with the new information. Everything seemed to work fine; mergerfs showed the combined size of the drives, 64TB total, with (at the time) about 6TB used. Great. I start running an rclone script to copy stuff from remote storage to local, and it's chugging along fine until it stops today saying there's no available space. I look, and it's filled up one drive but refuses to spread the data to the newer drives.

Most of the setup was done by following guides, and while I have gained some familiarity with Linux over time, I'm stumped. The merged directory /mnt/storage looks like it has 51TB left, but when I try to have rclone move files over it says there's no space. I've rebooted and double-checked the config files I can think of that could be causing issues, but everything looks fine. Any suggestions?
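
For reference, this is roughly how I've been checking the space, both on the merged pool and on the individual drives (the mount-point names match what I use later in the fstab):

# free space on the merged pool vs. the individual member drives
df -h /mnt/storage
df -h /mnt/disk*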

  • Yendor@reddthat.com · 11 months ago

    Your Snapraid config looks OK, although I think you’re meant to put a copy of the content file on each drive (you need it to perform recovery). Also, did you change your parity drive from a 6TB to a 14TB? Your parity drive needs to be the same size as (or bigger than) the biggest data drive in the array.
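
    Roughly what I mean, using your mount points (the filenames and the exact number of data drives here are just illustrative, not your actual config):

    # /etc/snapraid.conf (sketch)
    parity /mnt/parity01/snapraid.parity

    # one copy of the content file on the OS drive plus one per data drive
    content /var/snapraid/snapraid.content
    content /mnt/disk01/snapraid.content
    content /mnt/disk02/snapraid.content

    data d1 /mnt/disk01
    data d2 /mnt/disk02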

    I’m guessing the issue is in your mergerfs setup?

    • Excavate2445OP · 11 months ago

      Hm, that would all be the /etc/fstab, right? I thought I set it up fine because it showed the 64TB in /mnt/storage when I ran df -h, but it's possible I messed something up in there. I redacted the serial numbers (not sure if that's needed, but might as well); otherwise that's the fstab file. The first two 14TB drives were set up as one parity and one data drive, and when formatting them I followed https://zackreed.me/setting-up-snapraid-on-ubuntu/, which shows how to reserve 2% of the drives for overhead so that the parity drive always has enough space.

      /dev/disk/by-id/dm-uuid-LVM-LR3JAffs5mFXqzpLaQWaNEjjPJ4lNvEmeZG6d45IHGWFZfvEEeW3pFC6N0wDXMsd / ext4 defaults 0 1
      # /boot was on /dev/sdb2 during curtin installation
      /dev/disk/by-uuid/c4f1c5b7-dcef-407f-8bfa-ff2651579207 /boot ext4 defaults 0 1
      # /boot/efi was on /dev/sdb1 during curtin installation
      /dev/disk/by-uuid/4227-E912 /boot/efi vfat defaults 0 1
      /swap.img       none    swap    sw      0       0
      #2TB WD - Failed - Removed
      #/dev/disk/by-id/wwn-SN-part1 /mnt/secondary01 ext4 defaults 0 0
      
      #6TB Replacement for Failed Drive - Slot 02
      /dev/disk/by-id/wwn-SN-part2 /mnt/secondary01 ext4 defaults 0 0
      
      #14TB slot 03
      /dev/disk/by-id/wwn-SN-part1 /mnt/parity01 ext4 defaults 0 0
      
      #14TB slot 04
      /dev/disk/by-id/wwn-SN-part1 /mnt/disk01 ext4 defaults 0 0
      #14TB slot 05 - Seagate Exos
      /dev/disk/by-id/wwn-SN-part1 /mnt/disk02 ext4 defaults 0 0
      #14TB slot 06 - Seagate Exos
      /dev/disk/by-id/wwn-SN-part1 /mnt/disk03 ext4 defaults 0 0
      #14TB slot 07 - Seagate Exos
      /dev/disk/by-id/wwn-SN-part1 /mnt/disk04 ext4 defaults 0 0
      #14TB slot 08 - Seagate Exos
      /dev/disk/by-id/wwn-SN-part1 /mnt/disk05 ext4 defaults 0 0
      
      
      /mnt/disk* /mnt/storage fuse.mergerfs defaults,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=200G,fsname=mergerfs 0 0
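
      For the 2% reservation mentioned above, the formatting step I followed was something along these lines (the device path is the same redacted placeholder as in the fstab, and the exact flags may not match the guide word for word):

      # reserve 2% of blocks when formatting a new data drive
      mkfs.ext4 -m 2 /dev/disk/by-id/wwn-SN-part1
      # or adjust the reservation on an already-formatted drive
      tune2fs -m 2 /dev/disk/by-id/wwn-SN-part1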
      
  • Tranbi@lemmy.dbzer0.com · 11 months ago

    What mergerfs options did you pass in fstab? Check the mount options on GitHub. It took me way too long to discover ignorepponrename=true. In your case, moveonenospc could be relevant.
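
    If it helps, I believe mergerfs also lets you read its live settings back through the .mergerfs control file in the mount root, so you can confirm what actually got applied, something like:

    # read the effective runtime options from the mounted pool
    getfattr -n user.mergerfs.moveonenospc /mnt/storage/.mergerfs
    getfattr -n user.mergerfs.category.create /mnt/storage/.mergerfs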

    • Excavate2445OP · 11 months ago

      I pasted the fstab file in a reply to the other commenter. I do have moveonenospc=true in that file; should that be switched to false, then?

  • Excavate2445OP · 11 months ago

    “I think you’re meant to put a copy of the content file on each drive (you need it to perform recovery).”

    Changing this and restarting seems to be what worked, in case anyone comes across this later.
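
    (For anyone replicating this: after adding a content line per data drive, the usual re-sync would be something like the following.)

    # update parity and write out the content files listed in the config
    snapraid sync
    # quick sanity check of the array state
    snapraid status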