Linux 3.12 Kernel To Bring Faster File-Systems

  • Linux 3.12 Kernel To Bring Faster File-Systems

    Phoronix: Linux 3.12 Kernel To Bring Faster File-Systems

    With the Linux 3.12 kernel due for release in several weeks time but all major changes behind us now, here are some file-system tests from this forthcoming kernel update. Tested Linux file-systems for this Phoronix article include EXT4, Btrfs, XFS, and F2FS. From these results, there are multiple instances of these file-systems running measurably faster than Linux 3.11.

    http://www.phoronix.com/vr.php?view=19164

  • #2
    I would love it if Phoronix put up an evolution chart of the file-systems across all of the 3.xx kernel versions.

    tks
    Joao

    Comment


    • #3
      Originally posted by jmartins View Post
      I would love it if Phoronix put up an evolution chart of the file-systems across all of the 3.xx kernel versions.

      tks
      Joao
      That's rather time-intensive with very little ROI unless there's tons of interest/requests.
      Michael Larabel
      http://www.michaellarabel.com/

      Comment


      • #4
        Compression comparison please.

        I've been wanting a benchmark on transparent compression, but I just realized that, as awesome as it seems, I'm not getting much out of it. I only get a compressratio of 1.2 for an lz4-compressed ZFS volume containing 186 GiB of games; the problem is that most of the files are pre-compressed. Another one of those "if you need it, you'll know it" features.
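For anyone wanting to check the same thing on their own pool, ZFS exposes the achieved ratio as a read-only dataset property. A quick sketch (the `tank/games` dataset name is a placeholder; substitute your own):

```shell
# 'compression' shows the configured algorithm, 'compressratio' the
# achieved logical/physical ratio (1.00x means no savings).
# tank/games is a placeholder dataset name.
zfs get compression,compressratio tank/games

# Enabling lz4 only affects blocks written afterwards; existing data
# stays as it was written.
zfs set compression=lz4 tank/games
```

Note that a ratio near 1.0x on a volume full of game assets is expected, since most of those files ship already compressed.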

        Comment


        • #5
          Once again Phoronix has made the mistake of running file-system benchmarks (many of which just write unrealistic streams of zeros) on a SandForce SSD with compression. I'm not sure what you are measuring when you do that, but it is not realistic performance. Unless you modify all of your benchmarks to use real data (instead of streams of zeros), you should never use a SandForce SSD to run the storage benchmarks.

          Also, as has been mentioned before, the first benchmark, the fio Intel IOMeter fileserver access pattern, really needs to be compared with a 4K-aligned version of the same test, since non-4K-aligned data is rare and not representative of most workloads. Just add "ba=4k" to the fio test script for that benchmark and run it again. You could always report both results, or you could just migrate to the ba=4k version, since that is probably the most representative.
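A minimal sketch of what that change would look like in a fio job file. The job parameters below are illustrative placeholders, not the exact ones Phoronix ships; the only point is the `ba=4k` line (`ba` is fio's short alias for `blockalign`):

```ini
; Illustrative fileserver-style job; only ba=4k is the suggested change.
[fileserver-4k-aligned]
ioengine=libaio
direct=1
rw=randrw
rwmixread=80
size=1g
ba=4k        ; align every I/O to a 4 KiB boundary
```

Running the same job with and without `ba=4k` and reporting both would show how much of the result is an artifact of unaligned I/O.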

          Comment


          • #6
            Originally posted by Michael, from the article
            We'll continue exploring the Linux 3.12 kernel performance and trying out these file-system tests on traditional hard drives and other environments to see if these performance improvements persist.
            Maybe I'm one of the few people still using HDDs, but I would like to see if there are any gains for these "ancient" drives.
            Waiting for your next benchmark, Michael.

            Comment


            • #7
              Originally posted by chinoto View Post
              I've been wanting a benchmark on transparent compression, but I just realized that, as awesome as it seems, I'm not getting much out of it. I only get a compressratio of 1.2 for an lz4-compressed ZFS volume containing 186 GiB of games; the problem is that most of the files are pre-compressed. Another one of those "if you need it, you'll know it" features.
              I'm assuming you're referring to Btrfs. Btrfs won't compress something it already knows is compressed; it just skips over it. But if you still want it to try, there is a mount option for forced compression. You might have better luck with that.
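For reference, that option is `compress-force` on the Btrfs mount line. A rough sketch (the device, mount point, and choice of lzo are placeholders; pick whatever algorithm your kernel supports):

```shell
# 'compress' lets Btrfs skip blocks that don't shrink;
# 'compress-force' makes it attempt compression on everything,
# including files that are already compressed.
# /dev/sdb1 and /mnt/games are placeholder names.
mount -o compress-force=lzo /dev/sdb1 /mnt/games

# Or persist it in /etc/fstab:
# /dev/sdb1  /mnt/games  btrfs  compress-force=lzo  0  0
```

For pre-compressed game data, forcing compression mostly burns CPU for little gain, which is consistent with the 1.2x ratio reported above.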

              Comment


              • #8
                Seconded, please test spinning rust drives as well

                Originally posted by dietrdan View Post
                Maybe I'm one of the few people still using HDDs, but I would like to see if there are any gains for these "ancient" drives.
                Waiting for your next benchmark, Michael.
                What is more, it would be interesting to see which file-systems perform well on SSDs and which ones perform well on old spinning HDDs. Different file-systems, different optimization targets. Also, transparent compression in the SSD controller will skew test results unless you are writing random data.
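That last point is easy to demonstrate without any special hardware. A quick sketch using zlib as a rough stand-in for a SandForce-style controller's transparent compression (the function and variable names are made up for illustration) shows why all-zero benchmark payloads are meaningless on such drives, while random data defeats the compression:

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size / original size, with zlib standing in for the
    transparent compression inside a SandForce-style SSD controller."""
    return len(zlib.compress(data)) / len(data)

block = 1024 * 1024  # 1 MiB sample

zeros = bytes(block)             # the all-zero payload many benchmarks write
random_data = os.urandom(block)  # incompressible payload

# The zero block shrinks to a tiny fraction of its size, so the drive
# barely touches the flash; the random block does not compress at all.
print(f"zeros:  {compressed_ratio(zeros):.4f}")
print(f"random: {compressed_ratio(random_data):.4f}")
```

A benchmark writing zeros on such a drive is largely measuring the controller's compressor, not the file-system or the flash.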

                Comment


                • #9
                  Originally posted by jwilliams View Post
                  Once again Phoronix has made the mistake of running file-system benchmarks (many of which just write unrealistic streams of zeros) on a SandForce SSD with compression. I'm not sure what you are measuring when you do that, but it is not realistic performance. Unless you modify all of your benchmarks to use real data (instead of streams of zeros), you should never use a SandForce SSD to run the storage benchmarks.

                  Also, as has been mentioned before, the first benchmark, the fio Intel IOMeter fileserver access pattern, really needs to be compared with a 4K-aligned version of the same test, since non-4K-aligned data is rare and not representative of most workloads. Just add "ba=4k" to the fio test script for that benchmark and run it again. You could always report both results, or you could just migrate to the ba=4k version, since that is probably the most representative.
                  I would rather say: do not rely on benchmark tests alone to judge your SSDs. My analysis is backed by real tests on all kinds of SSDs, including SandForce ones, and SandForce drives perform quite well in comparison to other SSDs.

                  Comment
