Phoronix: Btrfs, EXT4 & ZFS On A Solid-State Drive
With the recent benchmarks comparing ZFS on FreeBSD against EXT4 and Btrfs on Linux having generated much interest and a very long discussion, this morning we are back with more benchmarks of ZFS on FreeBSD/PC-BSD 8.1 and of Btrfs and EXT4 on an Ubuntu Linux 10.10 snapshot with the most recent kernel. This time, however, the disk benchmarking is being done atop a high-performance solid-state drive courtesy of OCZ Technology, with an Intel Core i7 CPU. The drive being tested across these three leading file-systems is the OCZ Vertex 2, which promises maximum reads up to 285MB/s, maximum writes up to 275MB/s, and sustained writes up to 250MB/s.
Did you use the "ssd" mount option for btrfs?
ZFS can use a flash drive as a cache device or intent log device.
For anyone considering a NAS or other multi-disk build with ZFS it is an interesting option.
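As a sketch of how that looks in practice, the commands below add an SSD partition to an existing pool as a read cache (L2ARC) and as a dedicated intent log (SLOG). The pool name `tank` and the device paths are placeholders, not from the article:

```shell
# Assumed pool "tank"; device names are examples only.
# Add an SSD partition as an L2ARC read cache:
zpool add tank cache /dev/ada1p1

# Add a mirrored pair of SSD partitions as the intent log (SLOG),
# mirrored because losing an unmirrored log device can lose recent writes:
zpool add tank log mirror /dev/ada1p2 /dev/ada2p2

# Verify the new cache and log vdevs appear in the pool layout:
zpool status tank
```

Cache devices mainly help read-heavy workloads; a separate log device mainly helps synchronous writes (e.g. NFS or databases).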
No, it's no longer needed. Btrfs auto-detects whether the underlying device is an SSD and applies its optimizations accordingly. If you check dmesg after mounting Btrfs on an SSD, you should see a message about SSD optimizations being enabled.
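For example, checking the auto-detection might look like the following; the device and mount point are placeholders, and the exact wording of the kernel message varies between kernel versions:

```shell
# Mount a Btrfs filesystem on an SSD (example paths):
sudo mount /dev/sdb1 /mnt/btrfs

# Look for the SSD-mode message in the kernel log; on older kernels it reads
# along the lines of "detected SSD devices, enabling SSD mode":
dmesg | grep -i btrfs | grep -i ssd
```

If you want to force or disable the behaviour explicitly, the `ssd` and `nossd` mount options are still accepted.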
Originally Posted by ernstp
Why not compare with the native ZFS performance in (open)Solaris instead of *BSD?
OpenSolaris b134 wouldn't boot on the ThinkPad W510.
Originally Posted by wpoely86
Do your kernels (or udev) detect your SSD disks as non-rotational?
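One way to answer that question is to read the rotational flag the kernel exposes in sysfs for each block device; a sketch:

```shell
# Print the rotational flag for each SCSI/SATA block device:
# 0 = non-rotational (SSD), 1 = rotational (spinning disk).
for dev in /sys/block/sd*/queue/rotational; do
    printf '%s: %s\n' "$dev" "$(cat "$dev")"
done
```

Some SSDs misreport this, in which case the flag can be overridden manually by writing to the same sysfs file.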
Looks like BSD+ZFS got murdered to me.
ext4 SSD tweaks?
Has anyone with an SSD tried using the RAID optimisation options in ext4 (also available in ext2/3) to tweak it for better SSD performance?
What I have in mind is that the underlying flash memory block size on an SSD could be considered to be analogous to the RAID stripe width in that modifying a smaller or non-aligned block of data results in a read-modify-write cycle. The '-E stripe-width=n' option to mke2fs tells the filesystem block allocator to place data so as to try to avoid read-modify-write cycles if possible (i.e. align it to the start of a block and fill an entire block wherever possible).
If it's possible to find out from an SSD manufacturer (or even by querying the drive?) the flash block size, it might be interesting to compare performance of a drive set up "any old how" with one containing a partition that is aligned to the start of a flash block bearing a filesystem created with the stripe width option. One would expect to see some difference in the write tests but not in the read tests.
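A sketch of the experiment described above, assuming a 512 KiB flash erase block and 4 KiB filesystem blocks (the real erase-block size is vendor-specific and often not queryable from the drive):

```shell
# Assumed geometry -- substitute the real erase-block size if the vendor
# documents it:
ERASE_BLOCK_KIB=512   # assumed flash erase-block size in KiB
FS_BLOCK_KIB=4        # ext4 block size in KiB

# Number of filesystem blocks per erase block:
STRIDE=$((ERASE_BLOCK_KIB / FS_BLOCK_KIB))
echo "stride=$STRIDE"   # prints: stride=128

# Create the filesystem with allocation aligned to the erase block.
# DESTRUCTIVE -- run only on a scratch partition that itself starts on an
# erase-block boundary:
# mkfs.ext4 -b 4096 -E stride=$STRIDE,stripe-width=$STRIDE /dev/sdX1
```

With no parity or striping involved, setting `stripe-width` equal to `stride` as above is one plausible choice for a single SSD; comparing write throughput against an unaligned setup would test the hypothesis.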
Guys, thanks for your nice benchmarks on filesystems, but you repeatedly omit CPU usage and system load figures from these tests. There is a bottleneck somewhere (compare Btrfs on HDD vs. SSD) that needs to be identified.