Linux 4.0 Hard Drive Comparison With Six File-Systems

  • #11
    ZFS is the most interesting file system, too bad you didn't test it.

    • #12
      Originally posted by nils_ View Post
      I think FAT32 probably cuts a lot of corners in things that are implemented in (most) other filesystems so it's not going to be very realistic. It would be good to have some basic disk benchmarks though so you can see how much overhead the FS introduces - and also to see if a filesystem breaks specs for performance reasons (FUSE caching for example).
      That's exactly why I suggested it be included - FAT32 is so basic that it shows nothing but raw I/O performance. It has no optimizations and no special features, so it is unbiased. That makes it a great way to see where a filesystem really shines or regresses. In most cases, I figure just about any filesystem will outperform FAT32, but by how much it does matters too.

      I'm not sure if exFAT does anything special besides allowing much larger file and volume sizes. If it is otherwise just as basic as FAT32, then exFAT might be the better option, since it doesn't have FAT32's 4 GB file-size limit.

      • #13
        Originally posted by Staffan View Post
        ZFS is the most interesting file system, too bad you didn't test it.
        Stable versions don't build against 4.0 (I think 3.16 is the most recent kernel that builds with ZFS on Linux 0.6.3).

        • #14
          Originally posted by Staffan View Post
          ZFS is the most interesting file system, too bad you didn't test it.
          ZFS on Linux is problematic for so many reasons. On FreeBSD, the test would be irrelevant, because too many other factors would invalidate the results.

          I wish we could get ZFS into the mainline Linux kernel, but it's never going to happen, since the CDDL license means it can't be distributed that way. Right now, Btrfs is the only thing that is "equivalent", but you can't do anything like RAID-Z3, and the RAID-Z2 equivalent is still too experimental to trust. Right now it's single redundancy only, which sucks.

          But yeah, nothing beats checksum validation.

          • #15
            Originally posted by nils_ View Post
            I think FAT32 probably cuts a lot of corners in things that are implemented in (most) other filesystems so it's not going to be very realistic. It would be good to have some basic disk benchmarks though so you can see how much overhead the FS introduces - and also to see if a filesystem breaks specs for performance reasons (FUSE caching for example).
            Agreed that FAT32 is likely to be problematic for various reasons. I'm not sure there can ever be a good baseline file system: every fs has its own performance properties, so a "maximum speed" fs does not exist. The closest you might get is to benchmark fs performance on a ramdisk, eliminating I/O and disk delays in order to measure pure filesystem overhead (it could be argued that doing so makes the test unrepresentative of real-world use, but it might be interesting nonetheless).

            Interesting that there is so much variation between filesystems in the synthetic kernel compile. I wonder whether the filesystem in use would make any difference to the actual Linux kernel compile benchmark, or whether, even with 8 cores, the benchmark time is still completely dominated by the CPU.
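
The "pure filesystem overhead" idea above can be sketched as a tiny userspace micro-benchmark: churn through many small file create/write/sync/delete cycles and time it, running once on a tmpfs ramdisk and once on the HDD under test. This is only a rough sketch, not the article's methodology; `churn_small_files` and all counts/paths are invented for illustration.

```python
# Minimal filesystem metadata-overhead micro-benchmark (a sketch, not a
# rigorous test). Run it on two mount points -- e.g. a tmpfs ramdisk vs.
# the HDD under test -- and compare the timings.
import os
import tempfile
import time


def churn_small_files(directory, count=1000, size=4096):
    """Create, write, fsync and delete `count` small files; return seconds."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"bench_{i}.tmp")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the data out of the page cache
        os.remove(path)
    return time.perf_counter() - start


if __name__ == "__main__":
    # /tmp is often tmpfs on Linux; substitute real mount points to compare.
    with tempfile.TemporaryDirectory(dir="/tmp") as d:
        print(f"{churn_small_files(d, count=200):.3f}s for 200 small files")
```

The difference between the ramdisk and HDD timings approximates how much of the cost is the filesystem's own bookkeeping versus the device.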

            • #16
              Originally posted by Staffan View Post
              ZFS is the most interesting file system, too bad you didn't test it.
              He could run it on the same hardware using FreeBSD? You must be new around here if you're expecting anything interesting from the benchmarks. For example, the XFS benchmarks have failed in a similar way before, as far as I can remember. When he tests flash media, he doesn't use any flash-friendly filesystems, or even FAT, only filesystems that people don't commonly use on flash media. How useful...

              • #17
                The first two benchmarks (page 2, random write with the Flexible IO Tester) aren't of any use whatsoever, because no mechanical HDD in existence gets anywhere near those numbers. Realistic figures would be about the same as for random reads. It must be a cache issue of some sort...

                • #18
                  Originally posted by phoronix View Post
                  Phoronix: Linux 4.0 Hard Drive Comparison With Six File-Systems

                  It's been a while since last running any Linux file-system tests on a hard drive considering all of the test systems around here are using solid-state storage and only a few systems commissioned in the Linux benchmarking test farm are using hard drives, but with Linux 4.0 around the corner, here's a six-way file-system comparison on Linux 4.0 with a HDD using EXT4, Btrfs, XFS, and even NTFS, NILFS2, and ReiserFS.

                  http://www.phoronix.com/vr.php?view=21624
                  I have to admit speed is only one portion of filesystem testing. Something I would like to see is a kind of "reliability" / "corruption recovery" test: change a bit or two at random and see whether the filesystem detects the damage and can recover.
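
The bit-flip test described above could be prototyped in userspace against a plain file: record a checksum, flip one random bit, and confirm the checksum catches it. This is essentially what checksumming filesystems like ZFS and Btrfs do per block, though here a whole-file SHA-256 stands in for per-block checksums; the helper names are invented.

```python
# Sketch of a "flip a bit and see if it's detected" test, done in userspace
# with a whole-file SHA-256 in place of a filesystem's per-block checksums.
import hashlib
import os
import random
import tempfile


def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def flip_random_bit(path):
    """Flip one randomly chosen bit in the file, simulating silent corruption."""
    offset = random.randrange(os.path.getsize(path))
    with open(path, "r+b") as f:
        f.seek(offset)
        byte = f.read(1)[0]
        f.seek(offset)
        f.write(bytes([byte ^ (1 << random.randrange(8))]))


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(64 * 1024))
        path = f.name
    good = sha256_of(path)
    flip_random_bit(path)
    # A checksumming fs would flag exactly this kind of mismatch on read.
    print("corruption detected:", sha256_of(path) != good)
    os.unlink(path)
```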

                  • #19
                    To those posting about FAT32 being an unbiased comparison point: it is not. Most USB sticks, SD cards and SSDs have special optimizations for it in their firmware.

                    • #20
                      Can someone explain why the NTFS random read result was so high? If it's due to caching, I thought most Linux filesystems used the page cache.
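
One way to see the cache effect suspected in this thread: time small random writes with and without fsync. Buffered writes land in the page cache and return almost instantly; synced writes wait for the device, which is why un-synced random-write benchmarks can report numbers no mechanical disk could sustain. A rough sketch with invented sizes and helper names, not the benchmark the article used.

```python
# Rough demonstration of why write benchmarks that never sync mostly
# measure the page cache rather than the disk.
import os
import random
import tempfile
import time


def random_writes(path, sync, count=100, block=4096, span=16 * 1024 * 1024):
    """Do `count` 4K writes at random offsets; optionally fsync each one."""
    buf = b"\0" * block
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, span)
    start = time.perf_counter()
    for _ in range(count):
        os.pwrite(fd, buf, random.randrange(0, span - block))
        if sync:
            os.fsync(fd)  # force each write to stable storage
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        cached = random_writes(f.name, sync=False)
        synced = random_writes(f.name, sync=True)
        print(f"buffered: {cached:.4f}s  fsync'd: {synced:.4f}s")
```

On a real HDD the fsync'd run is typically orders of magnitude slower, which is roughly the gap between the suspicious benchmark numbers and what the hardware can actually do.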
