
Thread: 8-Way Linux 3.13 File-System Benchmarks

  1. #1
    Join Date
    Jan 2007
    Posts
    13,431

    Default 8-Way Linux 3.13 File-System Benchmarks

    Phoronix: 8-Way Linux 3.13 File-System Benchmarks

    Last week delivered SSD file-system tests and HDD file-system tests of the Linux 3.13 development kernel compared against the stable Linux 3.12 kernel. That earlier testing was limited to the popular EXT4, Btrfs, XFS, and F2FS file-systems, but out for your viewing pleasure today is an eight-way Linux 3.13 file-system comparison on Ubuntu.

    http://www.phoronix.com/vr.php?view=19594

  2. #2
    Join Date
    Apr 2011
    Location
    Sofia, Bulgaria
    Posts
    74

    Default

    Thank you, Michael, for using a real "spinning rust" HDD for the tests. The Raptor is not your typical HDD, but it should be much more representative of real-world scenarios than pure SSD tests.

  3. #3
    Join Date
    Sep 2012
    Posts
    311

    Default

    Results for HDDs remain useful as long as SSDs with power-loss protection are overpriced.

  4. #4
    Join Date
    Jan 2013
    Posts
    972

    Default

    I don't care about speed on mechanical disks, I use them for data storage only, where performance is secondary (at least for me).
    Will we see the same benchmarks run on the SSD, too?

  5. #5
    Join Date
    Oct 2011
    Posts
    13

    Default

    Quote Originally Posted by Vim_User View Post
    I don't care about speed on mechanical disks, I use them for data storage only, where performance is secondary (at least for me).
    Will we see the same benchmarks run on the SSD, too?
    I think over half of the computers shipping out there are still hard-disk-based. At work I have an HDD and a cheap-ish SSD, but work cycles through a lot of disk space, so compilation and DB work usually go on the rotational disks. I appreciate the HDD reports more than the SSD ones.

    What I would like to see, though I understand it would be quite convoluted, is a shootout with the fastest possible set of options for each workload/filesystem.

    I don't care if my dev DB is lost in the very rare event of a crash (I recreate it every couple of days regardless, due to the patching tests we do), or even if the very rare code loss happens (since I commit every hour or so). Power outages and crashes are very rare where I live; if the performance differential is big enough, I would risk losing a couple of changes once a year (and it's not even certain it would happen). Lots of devs around here also program on laptops, so I suppose it would matter even less for them...

  6. #6
    Join Date
    Dec 2012
    Posts
    367

    Default

    Quote Originally Posted by Vim_User View Post
    I don't care about speed on mechanical disks, I use them for data storage only, where performance is secondary (at least for me).
    Will we see the same benchmarks run on the SSD, too?
    I think there is a valuable lesson for mechanical-disk owners though: you should definitely reformat away from NTFS on volumes you don't plan on sticking near another Windows box, because the FS is really freaking slow under Linux.

    I do like how in the last 3 years btrfs came from usually half the performance of ext4 on mechanical disks to within a margin of error.

  7. #7
    Join Date
    Nov 2013
    Posts
    39

    Default

    Is there a problem with the threaded I/O tests included in the full results link? Most of the file-systems tested show terrible performance, except for NTFS, which has a massive advantage.

  8. #8
    Join Date
    Jan 2008
    Posts
    130

    Default

    Quote Originally Posted by zanny View Post
    I do like how in the last 3 years btrfs came from usually half the performance of ext4 on mechanical disks to within a margin of error.
    Agreed! Although Btrfs is terrible in the Compile Bench - Initial Create benchmark, which I guess is not too bad from a user's perspective, because you rarely have to create many files in daily desktop usage.

    BTW, why do Phoronix tests always only show Initial Create in Compile Bench instead of the actual compilation time? I'd think the actual compilation time would be more informative...

  9. #9
    Join Date
    Feb 2013
    Posts
    79

    Default

    Quote Originally Posted by stan View Post
    Agree! Although BTRFS is terrible in the Compile Bench - Initial Create benchmark, which I guess is not too bad from a user's perspective because you rarely have to create files in daily desktop usage.

    BTW, why do Phoronix tests always only show Initial Create in Compile Bench instead of the actual compilation time? I'd think the actual compilation time would be more informative...
    That's probably because the btrfs default mount options are quite conservative: space_cache and inode_cache (once it has finished creating the cache*) should give noticeably better performance, as would LZO compression. Setting the skinny-metadata flag with btrfstune might also help. There need to be some new benchmarks for the various options, IMO.

    * In the earlier Phoronix article testing out the various mount options http://www.phoronix.com/scan.php?pag..._options&num=4, I don't think enough time was left to allow inode_cache IO to complete before the test run; it can take some time.
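    For anyone wanting to try the options mentioned above, a minimal config sketch would look something like this (the device path /dev/sdb1 and mount point /mnt/data are placeholders, not anything from the article; check your distro's btrfs-progs version before relying on any of these):

    ```shell
    # /etc/fstab entry enabling the mount options discussed above:
    # free-space cache, inode number cache, and LZO compression.
    /dev/sdb1  /mnt/data  btrfs  space_cache,inode_cache,compress=lzo  0  0

    # Skinny metadata extent refs are a filesystem flag, not a mount
    # option; btrfstune -x sets it on an *unmounted* filesystem.
    # btrfstune -x /dev/sdb1
    ```

    Remember that inode_cache needs time to populate in the background, as noted above, so the first run after mounting won't reflect its benefit.
    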

  10. #10

    Default

    Michael, I wrote ZFSOnLinux's Linux 3.13 support within a few days of the Linux 3.13-rc1 tag, and it was accepted upstream, which means Git HEAD has Linux 3.13 support. For the purpose of benchmarking HEAD, you probably want to do a checkout and build your own packages:

    http://zfsonlinux.org/generic-deb.html

    That being said, I hope that you try to present a balanced view of your results (e.g. including comments from the people who wrote the code alongside your own) so that I do not regret posting this, but if the past is any indication, I do not have high hopes.
    Last edited by ryao; 01-09-2014 at 08:42 PM.
