I am setting up a Linux server (probably NixOS) where my VM disk files will be stored on top of an NTFS partition. (Yes, I know NTFS sucks, but it has to be this way.)
I'm asking which guest filesystem will give the best performance for a very mixed workload. If I had access to the extra features of btrfs or ZFS I would use them, but I have no idea how CoW in the guest interacts with NTFS underneath; that's why I'm asking here.
Also I would like some NTFS performance tuning pointers.
NTFS isn't going to care about, or even be aware of, the hypervisor's filesystem; ZFS or btrfs would both work fine.
Making sure you don't have misaligned sectors is pretty much the only major pitfall. Also make sure you use paravirtualized storage and network drivers.
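For the alignment part, here's a minimal sketch of how you could sanity-check it (assuming a Linux host with sysfs; "sda"/"sda1" are placeholder names, substitute your own disk and partition). It just reads the partition's start offset and checks it against a 1 MiB boundary:

```python
#!/usr/bin/env python3
# Minimal sketch: check that a partition starts on a 1 MiB boundary.
# Assumes a Linux host with sysfs; device names are placeholders.
from pathlib import Path

MIB = 1024 * 1024

def partition_alignment(disk: str, part: str) -> int:
    """Return the partition's start offset in bytes modulo 1 MiB (0 = aligned)."""
    # /sys/block/<disk>/<part>/start is the start offset in 512-byte sectors.
    start_sectors = int(Path(f"/sys/block/{disk}/{part}/start").read_text())
    return (start_sectors * 512) % MIB

if __name__ == "__main__":
    offset = partition_alignment("sda", "sda1")  # placeholder disk/partition
    print("aligned" if offset == 0 else f"misaligned by {offset} bytes")
```

Modern partitioning tools align to 1 MiB by default, so this mostly matters if the NTFS partition was created by something older.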
Edit: I just realized you're asking about the opposite direction, but ultimately the same guidelines apply. With the above caveats, it doesn't matter which filesystem is on top of which.
Yeah, I'd also like to ask why it has to be NTFS. I want to help you to the best of my ability.
In terms of what you're trying to accomplish, it would genuinely help to know what kind of server you're building and what it needs to do, so I can help with the design. It would also help to know which hypervisor will be reading those VM disks. For the rest of this post, I'm going to assume you're simply creating a Linux server meant to share those disks with a Hyper-V hypervisor (to cover the worst case… lol… but also because I suspect the only reason you need NTFS is to support a Windows hypervisor).
If that assumption is correct, I would still ask why the filesystem must be NTFS. If all you need is a share for the files, why does the local filesystem even matter? In my view, the worst case you have is a Samba share exposing those files as a Windows share, and Windows is capable of better sharing via iSCSI or NFS. None of these three sharing solutions (Samba, iSCSI, NFS) requires an NTFS filesystem. That said, I'll admit there may be some other constraint that rules these options out.
Anyway, I don't mean to pry; I do want to help, but a little more detail would go a long way toward finding you a solution.
What will this be running on?
Is it possible to do something like iSCSI?
"I have no idea how CoW interacts with NTFS"
With btrfs you can disable CoW for specific files or directories, which might give you a little performance boost.
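Here's a rough sketch of how you could do that programmatically (the equivalent of `chattr +C`), assuming Linux on x86-64; the ioctl and flag constants are the usual values from <linux/fs.h>, and the path is just a placeholder. Keep in mind NOCOW only takes effect on empty files, so it's easiest to set it on the directory before the disk images are created:

```python
#!/usr/bin/env python3
# Rough sketch: mark a btrfs directory NOCOW so VM images created inside it
# afterwards are written without copy-on-write (same effect as `chattr +C`).
import fcntl
import os
import struct

# Assumed values from <linux/fs.h> on x86-64 Linux.
FS_IOC_GETFLAGS = 0x80086601
FS_IOC_SETFLAGS = 0x40086602
FS_NOCOW_FL = 0x00800000

def set_nocow(path: str) -> None:
    """Set the NOCOW attribute on a file or directory on btrfs."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Read current attribute flags, then write them back with NOCOW added.
        buf = fcntl.ioctl(fd, FS_IOC_GETFLAGS, struct.pack("l", 0))
        (flags,) = struct.unpack("l", buf)
        fcntl.ioctl(fd, FS_IOC_SETFLAGS, struct.pack("l", flags | FS_NOCOW_FL))
    finally:
        os.close(fd)

if __name__ == "__main__":
    set_nocow("/var/lib/libvirt/images")  # placeholder path
```

Worth noting that NOCOW files on btrfs also lose checksumming and compression, so it's a trade-off, not a free win.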
I don't understand. Why would you store VM disks on NTFS? This isn't a viable solution, and you need to rethink your design. Also, for guest filesystems I would go with ext4, as it has lower overhead while still being reasonably modern.
Within guests these days I just use XFS, UFS, or NTFS depending on the OS. The hypervisor can have ZFS or Ceph.
UFS seems weird to use outside of flash.
It seems that way, but it performs better than ZFS on top of ZFS. The only OS I ran into that with was OPNsense, when I was playing with a virtualized firewall.
Don’t do ZFS on ZFS. It will destroy performance.
I personally go for ext4, as it is solid and lightweight. It is also somewhat resistant to power loss.
That's what I said. CoW on top of CoW is bad. Pretty sure ext4 isn't an option on OPNsense; it's UFS or ZFS, which is the only reason I mentioned it at all when presented with that choice.