
10-Way Linux File-System Comparison On Linux 3.10


  • #11
    Originally posted by justinzane View Post
    Other than avoiding flame wars over the "generally used" options, why not?
    The default options are generally default for a reason -- they are safe and give reasonably good performance under a wide variety of workloads and environments.

    You mention the discard option. That actually hurts performance with some SSDs, since some SSDs do not behave well when given a large list to TRIM. That is probably why it is not default.
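
    For what it's worth, a common workaround (just a sketch, not something from the article's test setup) is to leave discard off and batch the TRIMs periodically with fstrim instead:

      # trim all mounted filesystems that support it, in one batch (run as root):
      fstrim -av
      # scheduling that weekly (cron or a timer) gets the benefit of TRIM
      # without the per-delete cost of the discard mount option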

    Using compression is a very bad choice with many of the benchmarks phoronix runs, since many of the benchmarks are writing streams of zeros, which compress exceedingly well, unlike more realistic data.
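
    A quick way to see how extreme that is (the file names are arbitrary):

      dd if=/dev/zero    of=zeros.bin  bs=1M count=100   # 100 MiB of zeros
      dd if=/dev/urandom of=random.bin bs=1M count=100   # 100 MiB of incompressible data
      gzip -c zeros.bin  > zeros.gz
      gzip -c random.bin > random.gz
      ls -l zeros.gz random.gz   # the zeros shrink to a tiny fraction; the random file barely shrinks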

    Comment


    • #12
      Suggest using Common/Optimal Mount Options

      I, and I presume most other Linux sysadmins, have what I call "common" mount options for the various file systems we use. As an example, I almost always mount rotational btrfs filesystems with `relatime,compress=lzo,space_cache` and add `,ssd,discard` for SSDs. I'm sure other, more experienced users have "common" settings like this for every filesystem they regularly use. Additionally, since phoronix/openbenchmarking seems oriented towards FOSS users who are sophisticated and interested enough to read benchmarks, it follows that most of us use the results of your mount option comparisons as a guide when we set up systems.
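
      For illustration only (the UUID and mount point below are placeholders), those habits work out to fstab lines roughly like:

        # rotational disk
        UUID=<btrfs-uuid>  /data  btrfs  relatime,compress=lzo,space_cache               0  0
        # SSD: same options plus ssd,discard
        UUID=<btrfs-uuid>  /data  btrfs  relatime,compress=lzo,space_cache,ssd,discard   0  0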

      Therefore, I suggest determining, either through the results of your filesystem-specific tests or via a poll of users, what mount options are preferred for general use with each filesystem. Note that I say general use, since corner cases like an enterprise master LDAP server have different requirements than other specific systems like streaming CDN hosts. That way, your inter-filesystem benchmarks will be more representative both of the best performance each FS can give and of the common user experience.

      Hope that makes sense, and thanks!

      Comment


      • #13
        Confused...

        Originally posted by jwilliams View Post
        The default options are generally default for a reason -- they are safe and give reasonably good performance under a wide variety of workloads and environments.

        You mention the discard option. That actually hurts performance with some SSDs, since some SSDs do not behave well when given a large list to TRIM. That is probably why it is not default.

        Using compression is a very bad choice with many of the benchmarks phoronix runs, since many of the benchmarks are writing streams of zeros, which compress exceedingly well, unlike more realistic data.
        I'm basing my assertion that some options are preferable both on experience with my own systems doing real tasks and on phoronix's mount option comparisons. Since it looks like the inter-filesystem and intra-filesystem benchmarks run mostly the same tests, it seems like you are implying that Michael's intra-filesystem benches -- the mount option comparisons -- are pretty worthless. Now, I've used just about every tool in the PTS disk suite at some point, and I've written a few hackish benchmarks of my own for specific purposes. I know that each individual test has design biases, and that even recording and replaying the disk activity of an end-user system is only reflective of that user.

        However, one of the values of PTS, to me, is that it provides -- and Michael runs -- a variety of different benchmarks so that a more general insight can be gained. Even granting that every benchmark has its biases, it still seems that the options shown to be most effective will be the ones most commonly used. And, as I said, there are obviously differences between cheap flash, rotational media and modern SSDs. Since it seems like almost all FS benchmarks on phoronix are done with either magnetic disks or modern SSDs, that would give two "optimal" sets of mount options: those with SSD optimizations and those without.

        <gone to supper>

        Comment


        • #14
          Most (all?) modern SSDs do compression themselves in hardware. Adding a software compression option should therefore not only decrease performance, but also give no advantage in used space. I would disable it on SSDs.

          Comment


          • #15
            Originally posted by Vim_User View Post
            Most (all?) modern SSDs do compression themselves in hardware.
            Incorrect. The only common consumer SSDs that do compression are those with a Sandforce controller.

            Comment


            • #16
              Great that JFS appears in a test

              Over the years I've become pretty fond of JFS, which has never let me down, so it's great to see it included in this test. I wish Phoronix would include it in the other filesystem tests it runs from time to time. It may not be the latest thing, but it's solid, and it always appears right up there in comparison tests like this. It's a pity that RedHat (and Fedora) and SuSE (and OpenSuSE) make it difficult-to-impossible to install from scratch using JFS, but at least Debian has retained it as an option.

              I find JFS great on KVM guests, especially with the noop scheduler.
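
              In case it helps anyone, a rough sketch of switching a guest's virtio disk to noop (vda is just an example device, and this needs root):

                cat /sys/block/vda/queue/scheduler         # lists available schedulers; the current one is in brackets
                echo noop > /sys/block/vda/queue/scheduler # switch at runtime
                # or make it persistent with the elevator=noop kernel boot parameter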

              While XFS is good too, especially for larger files, I was responsible for systems during the dreaded days of file corruption when a filesystem wasn't shut down cleanly. That episode is now forgotten, but a little sense of mistrust still remains long after the issue was resolved. I also have horrible memories of piecing together an EXT4 system from the lost+found folder, and I once lost an entire reiser3 filesystem on a system running on another continent!

              JFS has been great on lightweight systems too. I run one server at an off-grid location, where power consumption is a significant practical issue, not just an ideal. JFS is known to be frugal in its processor demands, and it squeezes good capacity out of small disks too.

              Any chance of Phoronix repeating that seminal 2007 file system comparison test done on Debian?

              Comment


              • #17
                Use noatime instead of relatime

                Think about it.

                Any time you read ANYTHING, a write must occur. Relatime delays the writes so they occur more efficiently, but they still occur.

                This is why I always use noatime. Almost nothing (mutt?) needs to know when a file was last read, and the performance loss is not negligible, not to mention the added wear on flash media.
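
                As a sketch (the device and mount point are placeholders), it's just a one-word change in fstab, and you can try it on a running system first:

                  UUID=<fs-uuid>  /home  ext4  noatime  0  2

                  # or test without editing fstab:
                  mount -o remount,noatime /home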

                Comment


                • #18
                  Thanks, but...

                  Originally posted by Artemis3 View Post
                  Think about it.

                  Any time you read ANYTHING, a write must occur. Relatime delays the writes so they occur more efficiently, but they still occur.

                  This is why I always use noatime. Almost nothing (mutt?) needs to know when a file was last read, and the performance loss is not negligible, not to mention the added wear on flash media.
                  Thanks, you are probably right; but that is thoroughly beside the point. I am **not** suggesting that my options are optimal or the most commonly used. I'm just suggesting that it is a good idea to benchmark whatever options **are** optimal/most common. Determining that seems to be something that Michael does regularly anyway.

                  Comment


                  • #19
                    Originally posted by Artemis3 View Post
                    Any time you read ANYTHING, a write must occur. Relatime delays the writes so they occur more efficiently, but they still occur.
                    Incorrect. relatime only updates the access time if it is earlier than the last mtime/ctime. For example, with relatime, if a file is modified and then read, there will be a write to update the access time. If the file is read again, there will be no more writes to update the access time (until the file is modified again).
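
                    For anyone who wants to see it for themselves, a quick check on a relatime mount (the file name is arbitrary, and this ignores relatime's once-a-day refresh of very old atimes):

                      touch testfile                          # atime == mtime right after the write
                      sleep 2; cat testfile > /dev/null
                      stat -c 'after 1st read: %x' testfile   # atime advanced, since it was <= mtime
                      sleep 2; cat testfile > /dev/null
                      stat -c 'after 2nd read: %x' testfile   # atime unchanged until the file is modified again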

                    Comment


                    • #20
                      Efficient for Semi-Static Files, Numbers?

                      Originally posted by jwilliams View Post
                      Incorrect. relatime only updates the access time if it is earlier than the last mtime/ctime. For example, with relatime, if a file is modified and then read, there will be a write to update the access time. If the file is read again, there will be no more writes to update the access time (until the file is modified again).
                      My understanding is that this -- `relatime`'s behaviour as you describe it -- is of great utility for files that are rarely updated but frequently read. That would be files like the contents of /usr, /bin and /etc, as well as image archives, music archives, etc. Again, as I understand it, `relatime` is pretty much useless for frequently updated files like those in /var. There are a ton of articles referencing "relatime write reduction" on Google, but seemingly none with actual test/benchmark data on how much reduction is typical in various environments. Though this is now quite tangential to the original topic, you wouldn't happen to know of anywhere that has data on the effects of <none> vs `relatime` vs `noatime`, would you?
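
                      In case anyone wants to measure it, a rough sketch of what I have in mind (device name and workload are placeholders, and strictatime stands in for the "<none>" baseline): compare the sectors-written counter in /proc/diskstats around a read-heavy run under each option.

                        mount -o remount,strictatime /data                      # repeat with relatime and noatime
                        awk '$3 == "sda" {print $10}' /proc/diskstats > before  # 10th column = sectors written
                        tar -cf /dev/null /data/some-large-tree                 # read-heavy workload (placeholder)
                        sync
                        awk '$3 == "sda" {print $10}' /proc/diskstats > after
                        echo $(( $(cat after) - $(cat before) ))                # extra sectors written during the run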

                      Comment
