Btrfs RAID: Built-In/Native RAID vs. Mdadm

Written by Michael Larabel in Storage on 3 November 2014 at 11:00 AM EST.

Last month on Phoronix I posted some dual-HDD Btrfs RAID benchmarks, which were followed by Btrfs RAID 0/1/5/6/10 testing on four Intel solid-state drives. Still testing those four Intel Series 530 SSDs in a RAID array, today's new benchmarks compare the performance of Btrfs' built-in RAID capabilities against a Linux 3.18 software RAID created with mdadm and running Btrfs on the same hardware and software.

The RAID 0, 1, 10, 5, and 6 levels were all tested using the built-in Btrfs capabilities (Btrfs RAID 5/6 support remains experimental) and then compared against mdadm arrays created at the same levels with the same disks, with Btrfs formatted atop the md device and mounted with its default options. The Linux 3.18-rc1 kernel was used throughout all testing atop the Ubuntu 14.10 x86_64 Utopic Unicorn release.
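For reference, creating the two configurations generally looks like the following. This is a minimal sketch with hypothetical device names (/dev/sdb through /dev/sde standing in for the four SSDs) and mount point, not the exact commands used in this testing:

    # Native Btrfs RAID: the file-system handles striping/mirroring itself.
    # -d sets the data profile and -m the metadata profile (raid0/1/10/5/6).
    mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mount /dev/sdb /mnt/btrfs

    # mdadm RAID: the md layer assembles the array and Btrfs sees a single
    # block device, formatted and mounted with its default options.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.btrfs /dev/md0
    mount /dev/md0 /mnt/btrfs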

The Intel Core i7 5960X Haswell-E system was still in use and the four Intel SSDSC2BW12 (Series 530) 120GB solid-state drives were used to form the redundant array of inexpensive disks. The default mount options and other settings were used when mounting the Btrfs file-systems and configuring the RAID arrays. One important item to note: when using the native RAID support in Btrfs (or when using Btrfs on an SSD without RAID), the solid-state drive mount option (ssd) is enabled automatically, but when running off the mdadm array of SATA SSDs, Btrfs doesn't know that all of the underlying drives are solid-state storage.
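Btrfs enables the ssd option by keying off the kernel's per-device rotational flag, which an md array may not propagate from its member drives. A quick way to check, again with hypothetical device names, plus forcing the option at mount time:

    # 0 = non-rotational (SSD), 1 = rotational; md devices often report 1
    # even when every member drive is an SSD.
    cat /sys/block/sdb/queue/rotational
    cat /sys/block/md0/queue/rotational

    # The ssd mount option can be passed explicitly on the mdadm array:
    mount -o ssd /dev/md0 /mnt/btrfs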

Native Btrfs vs. mdadm RAID

On the following pages are our native Btrfs RAID vs. mdadm RAID benchmarks carried out via the Phoronix Test Suite benchmarking software with FIO, FS-Mark, IOzone, and Compile Bench. Following this article will be a Linux SSD RAID comparison using a number of different Linux file-systems.
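For those wanting to run a similar comparison on their own hardware, the same workloads can be kicked off with a single Phoronix Test Suite command; the profile names below are assumed from OpenBenchmarking.org rather than taken from this article, and versions may differ:

    # Installs any missing dependencies and runs each disk test in turn.
    phoronix-test-suite benchmark pts/fio pts/fs-mark pts/iozone pts/compilebench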

