
Linux 4.0 SSD EXT4 / Btrfs / XFS / F2FS Benchmarks

  • Linux 4.0 SSD EXT4 / Btrfs / XFS / F2FS Benchmarks

    Phoronix: Linux 4.0 SSD EXT4 / Btrfs / XFS / F2FS Benchmarks

    A few days ago I ran some fresh hard drive file-system benchmarks on Linux 4.0 and today those results are being complemented by the solid-state drive results. Tested on the SSD were the popular EXT4, Btrfs, XFS, and F2FS file-systems.


  • #2
    F2FS fail?

    And I thought F2FS was the shiznit for flash storage...



    • #3
      I really think you should add lzo compression to your btrfs tests; it's one of the main reasons I moved to it (I was on r4 before and loved it).
      I know it is not the standard mount option offered by distros, but I still believe it to be a great option (and I'm looking forward to official lz4 support).
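      For reference, a minimal sketch of how lzo compression is typically enabled on a btrfs mount; the device and mount point below are placeholders:

          # One-off: mount with transparent lzo compression
          mount -o compress=lzo /dev/sdX1 /mnt/data

          # Persistently, via a line like this in /etc/fstab (placeholder device/mount point):
          # /dev/sdX1  /mnt/data  btrfs  defaults,compress=lzo  0  0

      With plain compress=lzo, btrfs skips files its heuristic considers incompressible; compress-force=lzo attempts compression regardless.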



      • #4
        Originally posted by geearf
        I really think you should add lzo compression to your btrfs tests; it's one of the main reasons I moved to it (I was on r4 before and loved it).
        I know it is not the standard mount option offered by distros, but I still believe it to be a great option (and I'm looking forward to official lz4 support).
        Wouldn't compression lead to unnecessary writes on SSDs?



        • #5
          While compression may make sense in some use cases, in many it doesn't: for example, if your SSD controller does compression in hardware, or if you also use the filesystem to store already-compressed data such as audio files, video files, images, or a package repository. Enabling compression for these benchmarks gives you no advantage; whether compression would be an advantage for you can only be determined for your individual case.
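          If you are unsure whether your own data would benefit, a rough sanity check is to compare the original and compressed sizes of a representative file (this assumes the lzop tool is installed and uses it only as an approximation of btrfs's lzo; sample.bin is a placeholder name):

              stat -c %s sample.bin        # original size in bytes
              lzop -c sample.bin | wc -c   # approximate size after lzo compression

          If the two numbers are close, the data is effectively incompressible and filesystem compression would mostly add overhead.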



          • #6
            Originally posted by nanonyme
            Wouldn't compression lead to unnecessary writes on SSDs?

            Why would that be the case?
            I could see unnecessary reads but not writes.

            Originally posted by MoonMoon
            While compression may make sense in some use cases, in many it doesn't: for example, if your SSD controller does compression in hardware, or if you also use the filesystem to store already-compressed data such as audio files, video files, images, or a package repository. Enabling compression for these benchmarks gives you no advantage; whether compression would be an advantage for you can only be determined for your individual case.
            Why would it be bad to turn compression on (and not force it) on a mixed-use filesystem?
            As for hardware, I didn't know controllers had started doing that. How does it compare to software compression?

            Thanks!



            • #7
              The test machine has an insane amount of memory. <troll>The big reason why btrfs performs so well.</troll>
              I would love to see these tests with the mem=64 and mem=512 boot options, and no swap. Equally important to filesystem speed is the amount of memory needed to make a filesystem that fast.
              I think btrfs would be the first to succumb to the memory pressure; maybe even a lot of new bugs could be found.
              Next probably xfs.
              And then either ext4 or f2fs.
              Although btrfs should have a lower average memory usage when using btrfs snapshots in an lxc/vserver environment, or even without snapshots with batched dedup.
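              For anyone who wants to reproduce such a constrained run, a rough sketch (the mem= value follows the suggestion above, presumably meaning megabytes, and is only illustrative; GRUB details vary by distro):

                  # Run the benchmarks with no swap active
                  swapoff -a

                  # Cap usable RAM via the kernel command line, e.g. in /etc/default/grub,
                  # then regenerate the GRUB config and reboot:
                  # GRUB_CMDLINE_LINUX_DEFAULT="quiet mem=512M"
                  # update-grub    (or grub2-mkconfig -o /boot/grub2/grub.cfg)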



              • #8
                Originally posted by geearf
                Why would it be bad to turn compression on (and not force it) on a mixed-use filesystem?
                How would you not force it? AFAIK, compression is either on or off, not chosen per file. Enabling compression on a filesystem that holds already-compressed data only adds unnecessary overhead when the filesystem tries to compress that data.



                • #9
                  Originally posted by Ardje
                  The test machine has an insane amount of memory. <troll>The big reason why btrfs performs so well.</troll>
                  I would love to see these tests with the mem=64 and mem=512 boot options, and no swap. Equally important to filesystem speed is the amount of memory needed to make a filesystem that fast.
                  I think btrfs would be the first to succumb to the memory pressure; maybe even a lot of new bugs could be found.
                  Next probably xfs.
                  And then either ext4 or f2fs.
                  Although btrfs should have a lower average memory usage when using btrfs snapshots in an lxc/vserver environment, or even without snapshots with batched dedup.
                  16GB is insane? Given the use case of most Linux installations (VM hosts, servers, workstations), even 64GB is not much. (This test simulates the host, not the guests.)
                  Most of the servers around me have >= 128GB RAM (some 512GB/1TB), and most of the workstations I use have 32GB or more.

                  Plus, keep in mind that the amount of memory is only relevant as the benchmark data set grows.

                  - Gilboa
                  oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                  oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                  oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                  Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



                  • #10
                    Originally posted by gilboa
                    16GB is insane? Given the use case of most Linux installations (VM hosts, servers, workstations), even 64GB is not much. (This test simulates the host, not the guests.)
                    Most of the servers around me have >= 128GB RAM (some 512GB/1TB), and most of the workstations I use have 32GB or more.

                    Plus, keep in mind that the amount of memory is only relevant as the benchmark data set grows.

                    - Gilboa
                    Yeah, IMO these days you want at minimum 8GB for a new machine, with slots for 32GB. And yes, most VM hosts I see these days have over a hundred gigabytes as well, and typically this still leads to cache suffocation considering how much memory VMs eat.

