Linux RAID Performance On NVMe M.2 SSDs With EXT4, Btrfs, F2FS
To little surprise, when starting things off with a SQLite database insertion test, EXT4 on RAID0 with the NVMe drives was the fastest, though not by much over the standalone MP500 on EXT4. F2FS was also competing very well with EXT4. Btrfs was the slowest file-system, owing to its copy-on-write design, which by default tends not to perform as well with database-type workloads. Interestingly, using F2FS with RAID1 caused a significant performance regression. In all the configurations except Btrfs, using the Corsair MP500 NVMe drives was a big upgrade over the Samsung 850 PRO.
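The article does not publish its exact setup commands, but a typical software-RAID configuration for this kind of test can be sketched as follows. The device names and mount point are assumptions, not details from the article; running these commands requires root and will destroy any data on the drives.

```shell
# Create a two-drive RAID0 array from the NVMe SSDs
# (hypothetical device names; use --level=1 for the RAID1 runs).
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Format the array with each file-system under test, one at a time:
mkfs.ext4 /dev/md0        # EXT4
# mkfs.btrfs -f /dev/md0  # Btrfs
# mkfs.f2fs -f /dev/md0   # F2FS

# Mount it for benchmarking (assumed mount point):
mount /dev/md0 /mnt/raid
```

The standalone-drive results would simply use the raw device (e.g. /dev/nvme0n1) in place of /dev/md0.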
Next up are different tests with FIO. Interestingly, LZO compression under Linux 4.13 is regressing performance compared to many of our past Btrfs compression benchmarks. With Zstd compression for Btrfs being added in Linux 4.14, I'll have some updated Btrfs compression numbers soon. F2FS and EXT4 with RAID were leading the charts for random reads.
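For reference, the Btrfs compression modes mentioned are selected at mount time, and a random-read FIO run can be reproduced along these lines. All parameters below are illustrative assumptions, not the article's actual test profile:

```shell
# Btrfs compression is chosen via a mount option:
#   mount -o compress=lzo /dev/md0 /mnt/raid
#   mount -o compress=zstd /dev/md0 /mnt/raid   # available with Linux 4.14+

# A representative FIO random-read invocation (block size, queue depth,
# job count, and file size are illustrative values):
fio --name=randread --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --size=1G --runtime=60 --time_based \
    --directory=/mnt/raid --group_reporting
```

Swapping --rw=randread for --rw=randwrite or --rw=randrw covers the other common FIO access patterns.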
It was a similar story with IOPS. For the most part, these results are being published for reference purposes for anyone curious about NVMe RAID performance on Linux.