I was considering looking into installing Void with ZFS on root if I ever need to reinstall the OS on my computer. So far the advantages I have read about have been mostly about snapshotting and restoration but I am admittedly more interested because it’s a shiny new file system.
I am using a laptop with 250GB SSD. Would I benefit from using ZFS? Or is it overhyped? Any input is appreciated.
I started using BTRFS recently for the same purposes alongside transparent compression, and from what I have read both ZFS and BTRFS function well for this purpose.
ZFS handles RAID5/RAID6-style redundancy (RAIDZ) more robustly if I recall, but for a single-drive root either works. Also, while ZFS is known for high RAM requirements, that mostly bites on very large pools (>>1TB, especially with deduplication enabled), which wouldn't affect your use case.
One thing to note is that the "btrfs-convert" command can convert an ext4 partition to btrfs in place, without reinstalling, which is handy when a distro's installer doesn't play nice with non-ext filesystems. https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-convert
> transparent compression
I am guessing this means the files are compressed when they are stored on disk. Is the space saved this way significant? I have only 250GB space so it would be nice if it was.
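That's the idea: the filesystem compresses extents as they're written and decompresses them on read, invisibly to applications. As a minimal sketch of the concept (using Python's stdlib zlib, since the zstd that btrfs uses isn't in the stdlib; actual savings depend heavily on what kind of data you store):

```python
import zlib

# Text, logs, and source code compress well; already-compressed data
# (JPEG, video, archives) barely shrinks at all.
text = b"the quick brown fox jumps over the lazy dog\n" * 1000
compressed = zlib.compress(text, level=9)

ratio = len(compressed) / len(text)
print(f"stored at {ratio:.1%} of original size")  # well under 100% for text
```

Whether the savings are significant on a real 250GB root depends on the mix of compressible vs. already-compressed files, which is exactly what tools like compsize report.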
On my system with a ~60GB partition, here is the output of compsize run on my btrfs subvolumes:
sudo compsize -x /
Processed 299821 files, 195147 regular extents (197447 refs), 159626 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       40%      5.1G         12G          12G
none       100%      1.1G         1.1G         1.1G
zstd        34%      3.9G         11G          11G

sudo compsize -x /var
Processed 21378 files, 17517 regular extents (17628 refs), 13617 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       76%      1.5G         1.9G         1.9G
none       100%      1.1G         1.1G         1.1G
zstd        39%      317M         800M         787M

sudo compsize -x /tmp
Processed 7 files, 5 regular extents (5 refs), 2 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       99%      48K          48K          48K
none       100%      48K          48K          48K
zstd        69%      392B         564B         564B

sudo compsize -x /home
Processed 50736 files, 184400 regular extents (196958 refs), 19503 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       78%      9.0G         11G          10G
none       100%      7.1G         7.1G         6.0G
zstd        42%      1.8G         4.3G         4.3G
That works out to a savings of 9.3G, or about 38%. My fstab includes the mount option compress-force=zstd:9 to enable this.
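For anyone who wants to check the arithmetic, here it is over the four TOTAL rows above (sizes read off the compsize output, so the rounding is coarse and the percentage lands a point or so off the exact byte counts):

```python
# (disk usage, uncompressed) in GiB per mount, taken from the
# TOTAL rows of the compsize output above
mounts = {
    "/":     (5.1, 12.0),
    "/var":  (1.5, 1.9),
    "/tmp":  (48 / 1024**2, 48 / 1024**2),  # 48K is negligible
    "/home": (9.0, 11.0),
}

disk = sum(d for d, _ in mounts.values())
uncompressed = sum(u for _, u in mounts.values())
saved = uncompressed - disk

print(f"saved {saved:.1f}G of {uncompressed:.1f}G "
      f"({saved / uncompressed:.0%})")
```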
That is actually super good. I will have to look into the CPU overhead for compression/decompression.
Thanks for the detailed answer.
Here’s some extremely unscientific data I collected:
CPU: Ryzen 2600x with PBO limits maxed
zstd:1 - 38% compression ratio, ~1000MB/s write speed, all cores maxed out
zstd:3 - 35% compression ratio, ~800MB/s write speed, all cores maxed out
zstd:9 - 33% compression ratio, ~217MB/s write speed, all cores between 50% - 100% while running

CPU: Ryzen 2600x, cTDP set to 35 watts and only 2 cores & 4 threads enabled to simulate a weak processor
zstd:1 - 400MB/s
zstd:3 - 300MB/s
zstd:9 - 82MB/s
Same compression ratios and CPU usage
Notes:
- An NVMe SSD with a sequential write speed of >1000MB/s was used.
- The command used for testing was "pv enwik9 enwik9 enwik9 enwik9 enwik9 enwik9 > enwik9.6", where enwik9 is the file from http://prize.hutter1.net/
- The BTRFS filesystem was on a LUKS-encrypted device, so there was some AES overhead, but the 2600x supports hardware-accelerated AES instructions so it shouldn't play that large of a role.
- vm.dirty_ratio was set to 1.
- In the 2-core test, one core was enabled on each CCX.
- Decompression was too fast and was bottlenecked by the SSD
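In other words, with compress-force the effective write speed is roughly whichever is slower, the compressor or the SSD. A rough model using the figures above (treating them as steady-state throughput, which is a simplification):

```python
# Rough model: the write path is bottlenecked by the slower of the
# compressor and the SSD. Figures are the unscientific numbers above
# for the unrestricted 2600x.
ssd_write = 1000  # MB/s, sequential

compress_speed = {"zstd:1": 1000, "zstd:3": 800, "zstd:9": 217}  # MB/s
ratio = {"zstd:1": 0.38, "zstd:3": 0.35, "zstd:9": 0.33}  # disk/uncompressed

for level, speed in compress_speed.items():
    effective = min(ssd_write, speed)
    print(f"{level}: ~{effective}MB/s effective write, "
          f"stores data at {ratio[level]:.0%} of its size")
```

So on a fast CPU, zstd:1 costs essentially nothing while zstd:9 cuts sequential writes to roughly a fifth for a few extra points of compression.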
This is some great work. Thank you very much.
Good resources. Thanks for sharing. 👍