Phoronix: 8-Way Linux 3.13 File-System Benchmarks
Last week we delivered SSD file-system tests and HDD file-system tests of the Linux 3.13 development kernel compared against the stable Linux 3.12 kernel. That earlier testing was limited to the popular EXT4, Btrfs, XFS, and F2FS file-systems, but out for your viewing pleasure today is an eight-way Linux 3.13 file-system comparison on Ubuntu.
Thank you Michael for using a real "spinning rust" HDD for the tests. The Raptor is not your typical HDD, but it should be much more representative of real-world scenarios than pure SSD tests.
Results for HDDs remain useful as long as SSDs with power-loss protection stay overpriced.
I don't care about speed on mechanical disks, I use them for data storage only, where performance is secondary (at least for me).
Will we see the same benchmarks run on the SSD, too?
I think over half the computers out there are still hard-disk-based. At work I have an HDD and a cheap-ish SSD, but work cycles through a lot of disk space, so compilation and database work usually go on the rotational disks. I appreciate the HDD reports more than the SSD ones.
Originally Posted by Vim_User
What I would like to see, but I understand it would be quite convoluted, is to see a shootout with the fastest possible set of options for each workload/filesystem.
I don't care if my dev database is lost in the very rare event of a crash (I recreate it every couple of days anyway, due to the patching tests we do), or even if the very rare code loss happens (since I commit every hour or so). Power outages and crashes are very rare where I live, so if the performance differential is large enough, I would risk losing a couple of changes once a year (and it's not even certain that would happen). Lots of devs around here also program on laptops, so I suppose it would matter even less for them...
I think there is a valuable lesson for mechanical-disk owners though: you should definitely reformat any NTFS volumes you don't plan on keeping near another Windows box, because the file-system is really freaking slow.
Originally Posted by Vim_User
I do like how, over the last three years, btrfs went from roughly half the performance of ext4 on mechanical disks to within a margin of error.
Is there a problem with the threaded IO tests which are included in the full results link? Most of the filesystems tested have terrible performance except for NTFS which has a massive advantage.
Agreed! Although BTRFS is terrible in the Compile Bench - Initial Create benchmark, which I guess is not too bad from a user's perspective, because you rarely have to create that many files in daily desktop usage.
Originally Posted by zanny
BTW, why do Phoronix tests always only show Initial Create in Compile Bench instead of the actual compilation time? I'd think the actual compilation time would be more informative...
That's probably because the btrfs default mount options are quite conservative: space_cache and inode_cache (once it has finished creating the cache*) should give noticeably better performance, as would LZO compression. Setting the skinny-metadata flag with btrfstune might also help. There need to be some new benchmarks for the various options, IMO.
Originally Posted by stan
* In the earlier Phoronix article testing out the various mount options http://www.phoronix.com/scan.php?pag..._options&num=4, I don't think enough time was left to allow inode_cache IO to complete before the test run; it can take some time.
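For anyone wanting to try the options described above, a minimal sketch follows; the device and mountpoint names are placeholders, not from the thread, and this assumes a btrfs-progs release with btrfstune skinny-metadata support:

```shell
# Mount with LZO compression plus the free-space and inode caches
# (/dev/sdb1 and /mnt/btrfs are placeholder names).
sudo mount -o compress=lzo,space_cache,inode_cache /dev/sdb1 /mnt/btrfs

# Enable the skinny-metadata format flag; the filesystem must be
# unmounted first, and the change is one-way.
sudo umount /mnt/btrfs
sudo btrfstune -x /dev/sdb1
```

Note that inode_cache needs time to populate after the first mount, which is exactly the caveat raised in the footnote above.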
Michael, I wrote Linux 3.13 support for ZFSOnLinux within a few days of the Linux 3.13-rc1 tag and it was accepted upstream, which means git HEAD has Linux 3.13 support. For the purpose of benchmarking HEAD, you probably want to do a checkout and build your own packages.
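A rough sketch of such a checkout and build; the repository URLs and steps are my assumption based on the 2014-era two-repository (SPL plus ZFS) layout, not instructions from the post itself:

```shell
# Hypothetical sketch: build ZFSOnLinux from git HEAD. Requires kernel
# headers for the running kernel and the usual autotools toolchain.
git clone https://github.com/zfsonlinux/spl.git
git clone https://github.com/zfsonlinux/zfs.git

# SPL (Solaris Porting Layer) must be built and installed first.
cd spl
./autogen.sh && ./configure && make -j"$(nproc)"
sudo make install

# Then build ZFS against the freshly built SPL tree.
cd ../zfs
./autogen.sh && ./configure --with-spl=../spl && make -j"$(nproc)"
sudo make install
```

Distribution packaging targets (e.g. make deb or make rpm) may be preferable to a raw make install for benchmarking boxes, so the packages can be cleanly removed afterwards.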
That being said, I hope that you try to present a balanced view of your results (e.g. include comments from people who wrote the code with your own comments) so that I do not regret posting this, but if the past is any indication, I do not have high hopes.
Last edited by ryao; 01-09-2014 at 08:42 PM.