10-Way Linux File-System Comparison On Linux 3.10

  • 10-Way Linux File-System Comparison On Linux 3.10

    Phoronix: 10-Way Linux File-System Comparison On Linux 3.10

    On the latest Linux 3.10 stable kernel we have taken ten common Linux file-systems and generated an interesting performance comparison. The Linux file-systems being tested in this article include XFS, Btrfs, EXT2, EXT3, EXT4, ReiserFS, Reiser4, JFS, F2FS, and ZFS.

    http://www.phoronix.com/vr.php?view=19019

  • #2
    Typo on page 2:
    The results for FS-Mark were less spread in the multi-threaded test case with EXT4 and Btrfs and ZFS all being front-runners while Reiser4, ReiserFS, and ZFS were the slower ones.
    I think you meant XFS was a front-runner...



    • #3
      Would it be feasible to include Windows and OS X, with their default filesystems, in such tests as an additional reference point?
      Latvian Open Source Association co-founder, Debian Developer, Pythonista



      • #4
        Tux3

        Nice test!

        Too bad Tux3 isn't included.



        • #5
          Originally posted by aigarius View Post
          Would it be feasible to include Windows and OS X, with their default filesystems, in such tests as an additional reference point?
          No. For Windows, it's too much time with too little ROI for all the Windows tests when I'm doing everything myself and am already running short... For OS X, not enough Apple hardware...
          Michael Larabel
          http://www.michaellarabel.com/



          • #6
            Thanks for the summer gift, Michael!

            I was looking forward to using F2FS on a production machine once it is a bit more mature with kernel 3.11... now I think I'll stay with ext2 on my SSD (/home being on an ext4 partition).

            Could someone using F2FS on their system provide feedback, please?
            Thanks!



            • #7
              Originally posted by BO$$
              So basically Ext2 is better than Ext4. Nice! You probably call this evolution. Keep going file system developers. Crush the performance of the file system. So it's either ReiserFS or Ext2 since the rest are shit. Too bad one is made by a jailed murderer! Hahaha! You never get tired with Linux! Your best coders have to be removed from society for the good of us all! Hahahahahahahahahaaaaaa
              Maybe you should actually think before you post. Ext2 does not have journaling, which explains why it is faster. There is always a trade-off between speed and data integrity.

              It's like comparing FAT32 vs. NTFS: FAT32 is usually faster, but it is a lot riskier than NTFS in terms of integrity.

              http://technet.microsoft.com/en-us/l.../cc938440.aspx
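
              As a rough illustration of that trade-off, ext4's journal can simply be turned off, bringing its behavior (and speed) closer to ext2; /dev/sdX1 is an example device:

              ```
              # Create an ext4 filesystem without a journal (trades crash-safety for speed)
              mkfs.ext4 -O ^has_journal /dev/sdX1

              # Or remove the journal from an existing, unmounted ext4 filesystem
              tune2fs -O ^has_journal /dev/sdX1
              dumpe2fs -h /dev/sdX1 | grep -i features   # "has_journal" should be gone
              ```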
              Last edited by deanjo; 08-08-2013, 03:57 PM.



              • #8
                Suggestion for addition to Phoronix filesystem benchmarks

                Joseph Bacik (a btrfs dev) looked into btrfs's poor performance in the fio benchmark, and concluded that it is because btrfs does poorly with non-4K-aligned workloads. He found that adding

                ba=4k

                to the fio script brought btrfs performance up much closer to ext4 and XFS.

                Therefore, I suggest an additional phoronix filesystem benchmark, which would be almost identical to the current fio benchmark, except with the addition of 'ba=4k' to the fio script.

                This would allow people whose workloads are primarily 4K-aligned to see a comparison of the filesystems with a workload more similar to their own. It also provides a nice contrast with the non-4K-aligned benchmark so that people can see what filesystems do much better with unaligned writes.
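
                For reference, here is a minimal sketch of what such a job could look like; the engine, block sizes, and paths are illustrative guesses, not the actual PTS fio profile:

                ```
                # Hypothetical fio job: mixed block sizes, with every random offset
                # forced onto a 4K boundary via ba (short for blockalign)
                cat > aligned.fio <<'EOF'
                [global]
                ioengine=libaio
                direct=1
                rw=randwrite
                bsrange=512-64k
                ba=4k
                size=1g
                runtime=60
                time_based

                [aligned-test]
                directory=/mnt/test
                EOF
                fio aligned.fio
                ```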



                • #9
                  Reiser4 is really a mystery here! Its performance is way better than one would expect for an out-of-mainline-kernel filesystem with just one developer working on it in his spare time for fun. ZFS, on the other hand, is about what we would expect...

                  Michael, are there any "holes" in the tests? For example, the one showing EXT2 performance at about 530MB/s, where you point out that it is not writing all of the data because it is not syncing to the disk?

                  Furthermore, I see btrfs improving and catching up to EXT4! F2FS looks promising but needs more work to become a standard central filesystem for SSDs, and XFS is as mature as EXT4 but slightly slower!

                  EXT4 is still the best default in my opinion! But I also believe Reiser4 deserves some more love from kernel and FS devs!
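
                  On the sync question: one way to keep the page cache from inflating a naive write-speed number is to make the flush part of the measurement. A rough sketch (paths are examples):

                  ```
                  # Include the flush in the timing instead of measuring only the cached write
                  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024 conv=fdatasync

                  # Or time the write plus an explicit sync
                  time sh -c 'dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024; sync'
                  ```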



                  • #10
                    Prefer "Optimal" or "Common" mount options rather than "Default"

                    I know that it is hard to do a fair comparison of FSs, since many of them are highly configurable to suit magnetic vs. electronic media, streaming reads vs. random writes, etc. However, as Michael's [recent](www.phoronix.com/vr.php?view=18940) article has shown, there are some options that are highly likely to be used by the vast majority of sysadmins/users. As an anecdotal example, I always use the `relatime,compress=lzo,space_cache` options when creating a btrfs filesystem; if the disk is an SSD, I always add `ssd,discard`. I'm sure that there are generally optimal options for all FSs for general use.

                    I would love to see comparison benchmarks done with these optimal or generally recommended mount options instead of the defaults. Since I presume that anyone who cares enough to read benchmarks is going to use non-default mount options, this would make the benchmarks reflective of the real world (or at least its 'nix-geek subset).
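
                    For concreteness, here is roughly what I mean; the device and mountpoint are just examples:

                    ```
                    # Rotational disk: my usual btrfs options
                    mkfs.btrfs /dev/sdX1
                    mount -o relatime,compress=lzo,space_cache /dev/sdX1 /mnt/data

                    # SSD: the same options plus ssd,discard
                    mount -o relatime,compress=lzo,space_cache,ssd,discard /dev/sdX1 /mnt/data
                    ```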

                    Other than avoiding flame wars over the "generally used" options, why not?



                    • #11
                      Originally posted by justinzane View Post
                      Other than avoiding flame wars over the "generally used" options, why not?
                      The default options are generally default for a reason -- they are safe and give reasonably good performance under a wide variety of workloads and environments.

                      You mention the discard option. That actually hurts performance with some SSDs, since some SSDs do not behave well when given a large list to TRIM. That is probably why it is not default.

                      Using compression is a very bad choice with many of the benchmarks phoronix runs, since many of the benchmarks are writing streams of zeros, which compress exceedingly well, unlike more realistic data.
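
                      The zeros point is easy to demonstrate on a compress=lzo mount; a rough sketch, with example paths, that pre-generates incompressible data so /dev/urandom's own speed doesn't skew the result:

                      ```
                      # Pre-generate incompressible data; /dev/urandom is too slow to write inline
                      dd if=/dev/urandom of=/tmp/random.src bs=1M count=256

                      # Zero-filled writes compress to almost nothing under compress=lzo...
                      time dd if=/dev/zero of=/mnt/btrfs/zeros.bin bs=1M count=256 conv=fdatasync
                      # ...while incompressible data shows something closer to real device throughput
                      time dd if=/tmp/random.src of=/mnt/btrfs/random.bin bs=1M count=256 conv=fdatasync
                      ```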



                      • #12
                        Suggest using Common/Optimal Mount Options

                        I, and I presume most other Linux sysadmins, have what I call "common" mount options for the various filesystems that we use. As an example, I almost always mount rotational btrfs filesystems with `relatime,compress=lzo,space_cache` and add `,ssd,discard` for SSDs. I'm sure other experienced users have "common" settings like this for every filesystem they regularly use. Additionally, since Phoronix/OpenBenchmarking seems oriented towards FOSS users who are sophisticated and interested enough to read benchmarks, it follows that most of us use the results of your mount-option comparisons as a guide when we set up systems.

                        Therefore, I suggest determining, either through the results of your filesystem-specific tests or via a poll of users, which mount options are preferred for general use for each filesystem. Note that I say general use, since corner cases like an enterprise master LDAP server have different requirements than other specific systems like streaming CDN hosts. That way, your inter-filesystem benchmarks will be more representative both of the best performance each FS can give and of the common user experience.

                        Hope that makes sense, and thanks!



                        • #13
                          Confused...

                          Originally posted by jwilliams View Post
                          The default options are generally default for a reason -- they are safe and give reasonably good performance under a wide variety of workloads and environments.

                          You mention the discard option. That actually hurts performance with some SSDs, since some SSDs do not behave well when given a large list to TRIM. That is probably why it is not default.

                          Using compression is a very bad choice with many of the benchmarks phoronix runs, since many of the benchmarks are writing streams of zeros, which compress exceedingly well, unlike more realistic data.
                          I'm basing my assertion that some options are preferable both on experience with my own systems doing real tasks and on Phoronix's mount-option comparisons. Since it looks like the inter-filesystem and intra-filesystem benchmarks run mostly the same tests, it seems like you are implying that Michael's intra-filesystem benches -- the mount-option comparisons -- are pretty worthless. Now, I've used just about every tool in the PTS disk suite at some point, and I've written a few hackish benchmarks on my own for specific purposes. I know that each individual test has design biases and that even recording and replaying the disk activity of an end-user system is only reflective of that user.

                          However, one of the values of PTS, to me, is that it provides -- and Michael runs -- a variety of different benchmarks so that a more general insight can be gained. Even given the inherent bias of benchmarks, it still seems that the options shown to be most effective will be the most commonly used. And, as I said, there are obviously differences between cheap flash, rotational media, and modern SSDs. Since almost all FS benchmarks on Phoronix are done with either magnetic disks or modern SSDs, that would give two "optimal" sets of mount options: those with SSD optimization and those without.

                          <gone to supper>



                          • #14
                            Most (all?) modern SSDs do compression themselves in hardware. Adding a software compression option should therefore not only decrease performance, but also give no advantage in used space. I would disable that on SSDs.



                            • #15
                              Originally posted by Vim_User View Post
                              Most (all?) modern SSDs do compression themselves in hardware.
                              Incorrect. The only common consumer SSDs that do compression are those with a SandForce controller.

