I’m doing something right, and I’m doing something wrong.

A while ago I got a server and an initial set of hard drives and began the work to have nice local storage. I had a hard drive for the main operating system, one for security camera footage, and then a pair of 14TB drives to start my main storage pool. I set up mergerfs and snapraid, with one of them as the parity drive and one as a data drive. Everything was working well.
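For context, the layout is the usual mergerfs-over-individual-disks setup. Roughly what the fstab entries look like (labels, mount points, and options here are simplified/illustrative, not copied from my actual config):

```
# /etc/fstab (illustrative labels and mount points)
LABEL=disk1   /mnt/disk1    xfs            defaults                              0 2
LABEL=parity1 /mnt/parity1  xfs            defaults                              0 2

# Pool the data disks; the parity drive stays outside the pool
/mnt/disk*    /mnt/storage  fuse.mergerfs  allow_other,minfreespace=50G,fsname=mergerfs  0 0
```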

I recently picked up a set of 14TB drives to expand the available storage. I edited the fstab file and the snapraid config with the new information, and everything seemed to work fine: mergerfs showed the pool at the combined size of the drives, 64TB total, with (at the time) about 6TB used. Great. I started running an rclone script to copy stuff from remote storage to local, and it was chugging along fine until it stopped today saying there was no available space. Looking closer, it has filled up one drive but refuses to spread the data to the newer drives.
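For anyone unfamiliar with how mergerfs places writes: each create goes to one branch, chosen by a "create policy". A toy model of two common policies, the default `epmfs` ("existing path, most free space") versus `mfs` ("most free space") — the drive names and sizes below are made up, and this is not mergerfs's actual code, just a sketch of the selection logic:

```python
# Toy model of mergerfs create policies: why a pool can report lots of
# free space while new writes still fail or pile onto one branch.

def pick_branch_epmfs(branches, relpath, minfree=1):
    """epmfs: only branches where the path already exists are eligible;
    among those, pick the one with the most free space."""
    eligible = [b for b in branches
                if relpath in b["paths"] and b["free"] >= minfree]
    if not eligible:
        return None  # no eligible branch -> the write fails with "no space"
    return max(eligible, key=lambda b: b["free"])

def pick_branch_mfs(branches, relpath):
    """mfs: any branch is eligible; pick the one with the most free space,
    creating the path on it if needed."""
    return max(branches, key=lambda b: b["free"])

branches = [
    {"name": "disk1", "free": 0,      "paths": {"media"}},  # original drive, now full
    {"name": "disk2", "free": 14_000, "paths": set()},      # new drive, empty
    {"name": "disk3", "free": 13_500, "paths": set()},      # new drive, empty
]

hit = pick_branch_epmfs(branches, "media")
print("epmfs picks:", hit["name"] if hit else "nothing -> out of space")
print("mfs picks:  ", pick_branch_mfs(branches, "media")["name"])
```

Under `epmfs` the empty drives never qualify for a path that only exists on the full drive, which matches the symptom above; under `mfs` the emptiest drive wins.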

Most of the setup was done by following guides, and while I've gained some familiarity with Linux over time, I'm stumped. The merged directory /mnt/storage looks like it has 51TB free, but when I try to have rclone move files over it says there's no space. I've rebooted and double-checked every config file I can think of that could be causing issues, but everything looks fine. Any suggestions?
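In case it helps anyone suggest something: the per-drive numbers versus the pool, and the create policy mergerfs is actually running with, can be read like this (mount points are examples):

```
# Compare pool free space with each underlying branch
df -h /mnt/storage /mnt/disk1 /mnt/disk2 /mnt/disk3

# mergerfs exposes its runtime settings as xattrs on a control file
getfattr -n user.mergerfs.category.create /mnt/storage/.mergerfs
```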

  • Excavate2445OP · 1 year ago

    I think you’re meant to put a copy of the content file on each data drive (snapraid needs it to perform recovery).

    Changing this and restarting seems to be what fixed it, if anyone comes across this later.
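    For later readers, the relevant part of snapraid.conf ends up looking something like this (paths are examples, not my exact config) — one `content` line per data drive, plus a copy outside the pool, and none on the parity drive:

    ```
    # /etc/snapraid.conf (example paths)
    parity  /mnt/parity1/snapraid.parity

    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content
    content /mnt/disk3/snapraid.content

    data d1 /mnt/disk1
    data d2 /mnt/disk2
    data d3 /mnt/disk3
    ```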