Linux 4.0 Hard Drive Comparison With Six File-Systems


  • Linux 4.0 Hard Drive Comparison With Six File-Systems

    Phoronix: Linux 4.0 Hard Drive Comparison With Six File-Systems

    It's been a while since we last ran Linux file-system tests on a hard drive: all of the test systems around here use solid-state storage, and only a few systems in the Linux benchmarking test farm still have hard drives. But with Linux 4.0 around the corner, here's a six-way file-system comparison on Linux 4.0 with an HDD, using EXT4, Btrfs, XFS, and even NTFS, NILFS2, and ReiserFS.

    http://www.phoronix.com/vr.php?view=21624

  • #2
    An NTFS benchmark with a userspace driver is worthless... Why don't you run a test with the kernel driver from Tuxera, or Samsung's kernel driver for exFAT? The latter is interesting because of the new large SD cards...

    • #3
      It would also be worth investigating why XFS fails a few tests (at certain I/O sizes).

      • #4
        No Btrfs compression tests?

        The results would only be meaningful if no zero-content files were created - but still...
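The zero-content caveat is easy to demonstrate in userspace. A hedged sketch, with gzip merely standing in for btrfs's compress= mount option (which is not exercised here):

```shell
# A file of zeros compresses to almost nothing, while random data barely
# compresses at all. Any benchmark that writes zero-filled files would
# therefore wildly overstate what filesystem compression achieves on real data.
head -c 1048576 /dev/zero    > zeros.bin    # 1 MiB of zeros
head -c 1048576 /dev/urandom > random.bin   # 1 MiB of incompressible data
gzip -kf zeros.bin random.bin
ls -l zeros.bin.gz random.bin.gz            # zeros shrink dramatically; random data does not
rm -f zeros.bin random.bin zeros.bin.gz random.bin.gz
```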

        • #5
          I think including FAT32 would be a good "control" filesystem for these tests. It's very basic, so it's good at showing the raw performance of a drive. In other words, any test that performs better than FAT32 represents gained efficiency, and any test that performs worse means something is being bottlenecked and developers should take note. Every modern filesystem on any OS has a lot of additional layers and features that affect performance, so it's hard to gauge which filesystems are actually improving and which are just falling behind.

          With things like NTFS over FUSE, a lot of the performance gains come from caching, so it's hard to know how fast it REALLY is at reading from and writing to the disk.
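The caching effect is visible with plain dd on any filesystem. A hedged sketch (not from the article): the first run is absorbed by the page cache, while conv=fsync forces the second to stable storage before dd reports success.

```shell
tmp=$(mktemp)
# Cached write: completes as soon as the data is in RAM.
time dd if=/dev/zero of="$tmp" bs=1M count=64 2>/dev/null
# fsync'd write: not done until the disk actually has the data.
time dd if=/dev/zero of="$tmp" bs=1M count=64 conv=fsync 2>/dev/null
rm -f "$tmp"
```

On a hard drive the two timings can differ by an order of magnitude, which is the gap a benchmark must control for.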

          • #6
            Were the sequential write tests also done with the 128KB configuration? You don't mention why XFS wasn't included in that test.

            • #7
              For the FIO tests, isn't it strange that ext4 slows down when going from a 4KB to a 128KB block size?

              And for BTRFS, is the big difference caused by the metadata overhead of the 4KB chunks?

              What's up with ReiserFS? Isn't that file system dead?

              I love these storage benchmarks. Keep up the great work!
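For reference, larger block sizes usually win because of plain per-syscall overhead, which is what makes an ext4 slowdown at 128KB surprising. A hedged dd sketch of the usual expectation (the article's numbers came from fio, not dd):

```shell
tmp=$(mktemp)
# 32 MiB written as 8192 separate 4KB write() calls...
time dd if=/dev/zero of="$tmp" bs=4k count=8192 conv=fsync 2>/dev/null
# ...versus the same 32 MiB as only 256 x 128KB calls.
time dd if=/dev/zero of="$tmp" bs=128k count=256 conv=fsync 2>/dev/null
rm -f "$tmp"
```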

              • #8
                Originally posted by AndyChow View Post
                What's up with ReiserFS? Isn't that file system dead?
                As dead as ext3 I suppose...
                With that said, the extremely popular distributions that used reiserfs are phasing it out. It *will* die over time... but still has some years due to the transition away from it.

                If XFS had remained unreliable and/or unstable, there wouldn't be a suitable replacement for reiserfs (even today). With that said, btrfs continues to improve (talking about reliability and predictability)....

                So, if you need something to replace reiserfs today, your best bet is XFS. Down the road, it might be btrfs (and that might not be too far down the road).

                ext4 is popular, but obviously still uses a somewhat dated filesystem design. But desktop users probably don't need much more. There's a reason that the Red Hat boyz put a ton of effort into XFS.... they know they need more than ext4. And as Fedora switches to XFS as its default, even ext4 may start to disappear... who knows?

                IMHO, there's room for another filesystem to come and blow them all away (any takers?). Maybe Reiser4 (now that Linux has gone corporate, it's not allowed in the kernel because there isn't "big money" behind it). I have a spot in my heart for log based filesystems like nilfs2...
                Momentum is clearly behind btrfs right now... we'll see.

                • #9
                  Originally posted by schmidtbag View Post
                  I think including FAT32 would be a good "control" filesystem for these tests. It's very basic, so it's good at showing the raw performance of a drive. In other words, any test that performs better than FAT32 represents gained efficiency, and any test that performs worse means something is being bottlenecked and developers should take note. Every modern filesystem on any OS has a lot of additional layers and features that affect performance, so it's hard to gauge which filesystems are actually improving and which are just falling behind.

                  With things like NTFS over FUSE, a lot of the performance gains come from caching, so it's hard to know how fast it REALLY is at reading from and writing to the disk.
                  Good point. FAT32 would be a control, which so many tests are lacking these days.

                  • #10
                    Originally posted by gamerk2 View Post
                    Good point. FAT32 would be a control, which so many tests are lacking these days.
                    I think FAT32 probably cuts a lot of corners on things that are implemented in (most) other filesystems, so it's not going to be a very realistic comparison. It would be good to have some basic raw-disk benchmarks, though, so you can see how much overhead the FS introduces - and also to see if a filesystem breaks specs for performance reasons (FUSE caching, for example).
