You know, ZFS, ButterFS (btrfs… it’s actually “better”, right?), and I’m sure more.

I think I have ext4 on the home computer I installed Ubuntu on 5 years ago. How does the choice of filesystem play a role? Is that old hat now? Surely something like ext4 still has its place.

I see a lot of talk around filesystems, but I’ve never found a great resource that distinguishes them at a level that assumes I don’t know much. Can anyone give some insight into how filesystems work and why these newer filesystems, which appear to be highlights and selling points in most distros, are better than the older ones?

Edit: and since we are talking about filesystems, it might be nice to describe or mention how concepts like RAID or LUKS are related.

  • Atemu · 11 months ago

    > Because of its design and the way it handles file storage, a typical defrag process is not applicable or even necessary in the same way it is with other traditional filesystems

    It absolutely is, and it’s one of the biggest missing “features” in ZFS. If you use ZFS for performance-critical stuff, you have to make sure there’s always something like >30% free space remaining, because otherwise performance is likely to tank due to fragmentation.
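
    For the curious, you can check this yourself; a quick sketch, assuming a pool named tank (the name is just a placeholder):

        # FRAG is free-space fragmentation: how scattered the remaining
        # allocatable space is. High values mean new writes get chopped
        # into small, slow pieces.
        zpool list -o name,size,capacity,fragmentation tank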

    When fragmentation happens to a significant degree (and it can happen even with a ton of free space), you’re fucked, because data is sorta written in stone in ZFS: there’s no block-pointer rewrite, so existing data can never be moved after the fact. You have to re-create the entire dataset if it fragments too much. Pretty insane.
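
    The workaround, when you get there, is rewriting everything via send/receive into a fresh dataset. Roughly like this (pool and dataset names are placeholders, and you’d want to stop writers first):

        # Snapshot the fragmented dataset, replicate it into a new one
        # (which lays the data out fresh), then swap the names over.
        zfs snapshot tank/data@rewrite
        zfs send tank/data@rewrite | zfs receive tank/data-new
        zfs rename tank/data tank/data-old
        zfs rename tank/data-new tank/data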

    > Btrfs too handles chunk allocation efficiently and generally doesn’t require defragmentation

    Hahaha no. Fragmentation is the #1 performance issue in btrfs next to transaction syncs. The usual recommendation is to do frequent free-space defragmentation on btrfs using filtered balances to prevent it.
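
    For reference, a filtered balance looks something like this (the usage threshold and mountpoint are illustrative):

        # Rewrite only data chunks that are at most 50% full, compacting
        # their contents into fewer chunks and returning the freed space
        # to the unallocated pool. Far cheaper than a full balance.
        btrfs balance start -dusage=50 /mnt/pool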

    Additionally, in-file random writes cause a ton of fragmentation and kill performance in the workloads that rely on them (VMs, DBs, etc.).

    > it does have a defrag command, it’s almost never used by anyone

    That’s absolutely false. As elaborated above, fragmentation is a big issue in btrfs. If you run a DB on CoW btrfs, you must manually defragment it to have any decent performance. (Autodefrag exists but it’s borderline broken.)
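
    For illustration (the path is hypothetical), a manual defrag run looks like:

        # Recursively defragment the database directory; -v lists each
        # file as it's processed. Beware: on a snapshotted setup this
        # un-shares extents and can increase space usage.
        btrfs filesystem defragment -r -v /var/lib/postgresql

    The other common mitigation is marking the DB directory NOCOW (chattr +C) before any data lands in it, which sidesteps CoW fragmentation at the cost of checksumming.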

    Changing the transparent compression of existing data is also done via defragmentation.
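
    Concretely, that’s the -c flag; a sketch (path and algorithm are illustrative):

        # Rewrite existing files with zstd compression. Newly written data
        # follows the mount option (e.g. compress=zstd); defragment -c is
        # how you re-compress what's already on disk.
        btrfs filesystem defragment -r -czstd /srv/data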

    > Fragmentation is only really an issue for spinning disks

    This is false; fragmentation slows down SSDs as well. The only difference is that, with SSDs, random IO is usually just one order of magnitude slower than sequential, rather than HDDs’ two or three. It doesn’t affect SSDs nearly as much as HDDs, but it still affects them significantly.
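
    You can measure the gap yourself with fio; a minimal sketch (the test file path, size, and block sizes are arbitrary):

        # Sequential reads in 1 MiB blocks vs. random reads in 4 KiB blocks,
        # both bypassing the page cache. Expect roughly a ~10x gap on an SSD
        # and ~100x or worse on an HDD.
        fio --name=seq --filename=/tmp/fio.test --size=1G --rw=read --bs=1M --direct=1
        fio --name=rand --filename=/tmp/fio.test --size=1G --rw=randread --bs=4k --direct=1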