Originally posted by pkese:
I don't know if this is because btrfs can't use the block cache properly or because ZFS's own caching is better. I wouldn't be surprised if this is something needed only by CoW filesystems.
The same "server" (It's the cheapest first-gen threadripper on a gaming asrock mobo and 128GB of ECC RAM, not a true server) with the same array of 20 SAS drives in the same system, arranged as RAID10 for both ZFS and btrfs, the same Windows VMs run like absolute lagfest with btrfs (5-10 seconds to register a click on screen), while with ZFS it's nearly as good as the same VM running on a mdadm raid with normal filesystem (ext4/xfs), even BEFORE I start adding SSDs as read/write cache for it.
Ah, ZFS also handles multiple VMs without any noticeable change, while running multiple VMs on a btrfs array is ridiculously worse.
And I'm limiting ZFS's cache to 32GB of RAM, while the Linux page cache has no such limit and can use all the free RAM available, which is 100+GB (the server has 128GB, a single 16GB VM is the only one running during testing, and the host is a headless OpenSUSE Tumbleweed system using less than 512MB of RAM).
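For reference, capping the ARC like that is just the standard OpenZFS zfs_arc_max module parameter, with the value in bytes (32 GiB here):

    # persistent, in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=34359738368
    # or applied at runtime
    echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max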
Nevertheless, ZFS is known to perform very well on machines with lots of memory.
An HP MicroServer Gen7 that is absolute garbage as far as CPU goes (an embedded AMD pre-Ryzen APU with ECC support) with only 4GB of RAM can still sustain 50-70 MB/s sequential writes to the array over the network (it's a NAS with someone writing VM disk images into its Samba shared folder) for hours on end on a ZFS RAID5 with compression enabled; btrfs can't match that even on RAID10.
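ZFS's RAID5 equivalent is raidz1, and compression is just a per-dataset property; the setup looks roughly like this, again with placeholder device names and assuming lz4 as the compression algorithm:

    zpool create tank raidz1 sda sdb sdc sdd
    zfs set compression=lz4 tank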