You know, ZFS, ButterFS (btrfs… it's actually "better", right?), and I'm sure more.
I think I have ext4 on the home computer I installed Ubuntu on five years ago. How does the choice of file system play a role? Is that old hat now? Surely something like ext4 still has its place.
I see a lot of talk around filesystems, but I've never found a great resource that distinguishes them at a level that assumes I don't know much. Can anyone give some insight on how file systems work and why these new filesystems, which appear to be highlights and selling points in most distros, are better than older ones?
Edit: and since we are talking about filesystems, it might be nice to describe or mention how concepts like RAID or LUKS are related.
Not OP, but yes, that's pretty much how it works. (ZFS scrubs do not defragment data, however.)
Fragmentation isn’t really a problem for several reasons.
Some (most?) COW filesystems have mechanisms to mitigate fragmentation. ZFS, for instance, uses a special allocation strategy to minimize fragmentation and can reallocate data during certain operations like resilvering or rebalancing.
ZFS doesn't even have a traditional defrag command. Because of its design and the way it handles file storage, a typical defrag process isn't applicable, or even necessary, in the same way it is with other traditional filesystems.
Btrfs also handles chunk allocation efficiently and generally doesn't require defragmentation. It does have a defrag command, but it's almost never used unless you have a special reason to (e.g. a program that reads raw sectors of a file and needs the data to be contiguous); there's an example of the command after this comment.
Fragmentation is only really an issue for spinning disks; however, that is no longer a concern for most spinning-disk users because:
Enterprise users also almost always use a RAID (or similar) setup, so the same as above applies. They also use filesystems like ZFS, which employ heavy caching mechanisms, typically backed by SSDs/NVMe drives, so again, fragmentation isn't really an issue.
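For reference, here's a minimal sketch of the btrfs defrag command mentioned above, wrapped in Python. It assumes btrfs-progs is installed and that you have permission on the file; the path is just a placeholder.

```python
# Minimal sketch: ask btrfs to rewrite one file's extents so they become
# (more) contiguous. Assumes btrfs-progs is installed and we have permission
# on the file; "/mnt/data/raw.img" is only a placeholder path.
import subprocess

def defragment_file(path: str) -> None:
    subprocess.run(["btrfs", "filesystem", "defragment", path], check=True)

if __name__ == "__main__":
    defragment_file("/mnt/data/raw.img")
```

One caveat: defragmenting a file on btrfs can break reflink/snapshot sharing for that file, so it may increase space usage.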
Cool, good to know. I’d be interested to learn how they mitigate fragmentation, though. It’s not clear to me how COW could mitigate the copy cost without fragmentation, but I’m certain people smarter than me have been thinking about the problem for my whole life. I know spinning disks have their own set of limitations, but even SSDs perform better on sequential reads over random reads, so it seems like the preference would still be to not split a file up too much.
It absolutely is, and it's one of the biggest missing "features" in ZFS. If you use ZFS for performance-critical stuff, you have to make sure there's always something like >30% free space remaining, because otherwise performance is likely to tank due to fragmentation (a quick way to check the numbers is sketched after this comment).
When fragmentation happens to a significant degree (and it can happen even with a ton of free space), you’re fucked because data is sorta written in stone in ZFS. You have to re-create the entire dataset if it fragments too much. Pretty insane.
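To put numbers on the free-space advice above, here's a minimal sketch that reads a pool's capacity and fragmentation from `zpool list`. It assumes the ZFS CLI tools are installed; the pool name `tank` is a placeholder. Note that ZFS's FRAG metric measures free-space fragmentation, not how fragmented individual files are.

```python
# Minimal sketch: read a pool's capacity and (free-space) fragmentation
# from `zpool list`. Assumes the ZFS CLI is installed; "tank" is a
# placeholder pool name.
import subprocess

def pool_stats(pool: str) -> dict:
    out = subprocess.run(
        ["zpool", "list", "-H", "-o", "capacity,fragmentation", pool],
        capture_output=True, text=True, check=True,
    )
    capacity, fragmentation = out.stdout.split()
    return {"capacity": capacity, "fragmentation": fragmentation}

if __name__ == "__main__":
    print(pool_stats("tank"))  # e.g. {'capacity': '72%', 'fragmentation': '41%'}
```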
Hahaha no. Fragmentation is the #1 performance issue in btrfs next to transaction syncs. The usual recommendation is to do frequent free-space defragmentation on btrfs using filtered balances to prevent it (see the sketch after this comment).
Additionally, in-file random writes cause a ton of fragmentation and kill performance for the workloads that depend on them (VMs, DBs, etc.).
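Here's a minimal sketch of the kind of filtered balance mentioned above: it rewrites only data/metadata chunks that are less than a given percentage full, which compacts free space without doing a full balance. It assumes btrfs-progs is installed and the script runs as root; the mountpoint and the 50% threshold are placeholders.

```python
# Minimal sketch: a filtered btrfs balance that rewrites only chunks which
# are below the given usage percentage, compacting free space without a
# full balance. Assumes btrfs-progs and root; "/mnt/data" and the 50%
# threshold are placeholders.
import subprocess

def filtered_balance(mountpoint: str, usage: int = 50) -> None:
    subprocess.run(
        ["btrfs", "balance", "start",
         f"-dusage={usage}",   # rewrite data chunks that are < usage% full
         f"-musage={usage}",   # same filter for metadata chunks
         mountpoint],
        check=True,
    )

if __name__ == "__main__":
    filtered_balance("/mnt/data", usage=50)
```

For the VM/DB case, a common workaround is to mark the directory NOCOW (`chattr +C` on an empty directory) before creating the image or database files, so they skip copy-on-write, and thus most of the fragmentation, entirely.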
That’s absolutely false. As elaborated above, fragmentation is a big issue in btrfs. If you run a DB on CoW btrfs, you must manually defragment it to have any decent performance. (Autodefrag exists but it’s borderline broken.)
Additionally, changing the transparent compression of existing files is done via defragmentation.
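To illustrate that last point, here's a minimal sketch of recompressing existing files by running a recursive defragment with the compress option. It assumes btrfs-progs is installed and sufficient privileges; the path and the choice of zstd are placeholders.

```python
# Minimal sketch: rewrite existing files with a new compression algorithm by
# running a recursive btrfs defragment with the -c (compress) option.
# Assumes btrfs-progs and sufficient privileges; "/mnt/data" is a placeholder.
import subprocess

def recompress(path: str, algo: str = "zstd") -> None:
    subprocess.run(
        ["btrfs", "filesystem", "defragment",
         "-r",            # recurse into the directory tree
         f"-c{algo}",     # recompress extents (zlib, lzo, or zstd) as they are rewritten
         path],
        check=True,
    )

if __name__ == "__main__":
    recompress("/mnt/data", algo="zstd")
```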
This is false; fragmentation slows down SSDs as well. The only difference is that, with SSDs, random I/O is usually just one order of magnitude slower than sequential, rather than the two or three orders of magnitude you see on HDDs. It doesn't affect SSDs nearly as much as it does HDDs, but it still affects them, and significantly so.