HDD & SSD File-System Benchmarks On Linux 3.9 Kernel


  • HDD & SSD File-System Benchmarks On Linux 3.9 Kernel

    Phoronix: HDD & SSD File-System Benchmarks On Linux 3.9 Kernel

    For those curious where the common Linux file-systems stand performance-wise for the Linux 3.9 kernel, here are benchmarks from a solid-state drive and hard drive when comparing the EXT4, Btrfs, XFS, and F2FS file-systems from this yet-to-be-released Linux kernel.

    http://www.phoronix.com/vr.php?view=18573

  • #2
    Apples and oranges?

    Since btrfs and xfs include additional functionality, shouldn't ext4 be tested configured with LVM2 and mdraid to compare equivalents?
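    For reference, an "equivalent" ext4 stack could be assembled roughly like this (a sketch only; the device names /dev/sdb and /dev/sdc, the volume group name, and the sizes are placeholders, and everything needs root):

    ```shell
    # Mirror two disks with mdraid (analogous to btrfs RAID1)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Layer LVM2 on top for snapshots and resizing (analogous to btrfs subvolumes)
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -n data -L 100G vg0

    # Finally, ext4 on the logical volume
    mkfs.ext4 /dev/vg0/data
    mount /dev/vg0/data /mnt/data
    ```

    Benchmarking ext4 through that whole stack would capture the md and device-mapper overhead that btrfs absorbs internally.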

    Comment


    • #3
      BTRFS is sometimes slower on an SSD than an HDD? Does that make sense?

      Comment


      • #4
        I stress tested BTRFS for a recent server, and got it to crash dump twice. I seem to be able to trigger it mostly with compression on and writing many small files with rsync. Whatever the case, that ixnayed BTRFS for me... not that I expected different, but it pays to know these things.

        Comment


        • #5
          Originally posted by phoronix View Post
          Phoronix: HDD & SSD File-System Benchmarks On Linux 3.9 Kernel

          For those curious where the common Linux file-systems stand performance-wise for the Linux 3.9 kernel, here are benchmarks from a solid-state drive and hard drive when comparing the EXT4, Btrfs, XFS, and F2FS file-systems from this yet-to-be-released Linux kernel.

          http://www.phoronix.com/vr.php?view=18573
          I'll be honest, I only read this article for the ext4 vs btrfs comparison. I just did a home server install of F18 with Btrfs and am quite happy with it. Michael, the next time you do these benchmarks, can you include two more setups? One is ext4 on top of LVM, to see what the penalty there is (it's also a more comparable benchmark to btrfs that way). The other is btrfs with compress=lzo.

          Comment


          • #6
            Originally posted by Shaman666 View Post
            I stress tested BTRFS for a recent server, and got it to crash dump twice. I seem to be able to trigger it mostly with compression on and writing many small files with rsync. Whatever the case, that ixnayed BTRFS for me... not that I expected different, but it pays to know these things.
            Did you write up or check for bug reports? Crashes are bad, yes, but they shouldn't have hosed the entire filesystem, since btrfs keeps multiple copies of both the metadata and the superblock.

            Comment


            • #7
              btrfs

              Originally posted by Shaman666 View Post
              I stress tested BTRFS for a recent server, and got it to crash dump twice. I seem to be able to trigger it mostly with compression on and writing many small files with rsync. Whatever the case, that ixnayed BTRFS for me... not that I expected different, but it pays to know these things.
              Make sure you're using the latest kernel (3.9.0-rc3) and btrfs-tools. I've been using btrfs on two desktops, a laptop, and a server for nine months now. The only issue was a bug in the 3.8 series that caused an oops, and that was fixed within a month.

              Comment


              • #8
                Another vote to add btrfs with compress=lzo

                Please test btrfs with LZO, as that's the common use case for me and the big selling point of btrfs for most people running SSDs. It's not the default, but it should be. Also, the test data would need to make sense for your test goals: /dev/random doesn't resemble the compressible text, scripts, and ELF binaries that are common on a system (assuming the test goals are meant to resemble real system activity).
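                For what it's worth, enabling it is just a mount option (a sketch; /dev/sdb1 and the mount point are placeholders):

                ```shell
                # Mount a btrfs volume with LZO compression;
                # files written from now on get compressed transparently
                mount -o compress=lzo /dev/sdb1 /mnt/btrfs

                # Or make it persistent via /etc/fstab:
                # /dev/sdb1  /mnt/btrfs  btrfs  compress=lzo  0  2
                ```

                Note that only data written after the option is enabled is compressed; existing files stay as they were.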

                Comment


                • #9
                  Am I the only one here excited about F2FS? It might become the default filesystem for the Raspberry Pi when they switch from 3.6 to 3.8.

                  Comment


                  • #10
                    y u no test zfs ?

                    Again no ZFS. Why? Phoronix is what got my attention onto zfsonlinux in the first place, yet it's been ignoring it for a while now. Don't you like the developers, or what?
                    It's IMHO the only advanced-features filesystem today that is actually usable; btrfs still feels light-years away, and in the meantime I'm migrating more and more boxes to zfsonlinux.
                    And it looks like it's also already compatible with 3.9, so no excuses there.

                    Comment


                    • #11
                      SSD performance

                      I'll be getting an SSD soon so this information was useful. Too bad most distros don't yet support install to F2FS, so it's a slight pain to set up. Does F2FS come with some sort of automatic TRIM enabled or do you have to enable it manually, e.g. using fstrim?
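                      In case it helps, on the established filesystems TRIM is usually either a periodic fstrim run or the discard mount option (a sketch; the paths are examples, and whether F2FS needs or honors these is exactly the open question here):

                      ```shell
                      # One-shot TRIM of a mounted filesystem
                      # (run periodically, e.g. from a cron job)
                      fstrim -v /

                      # Alternatively, continuous TRIM via the discard
                      # mount option in /etc/fstab:
                      # /dev/sda2  /  ext4  defaults,discard  0  1
                      ```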

                      Comment


                      • #12
                        Originally posted by molecule-eye View Post
                        I'll be getting an SSD soon so this information was useful. Too bad most distros don't yet support install to F2FS, so it's a slight pain to set up. Does F2FS come with some sort of automatic TRIM enabled or do you have to enable it manually, e.g. using fstrim?
                        AFAIK (correct me if I'm wrong), F2FS is not made for SSDs; it's made for direct access to flash chips. That means it does its own wear leveling and doesn't need TRIM. Theoretically, you could change your SSD's firmware to use F2FS internally, and your OS could then use any FS on top (just as it's done right now: the firmware inside the SSD uses its own proprietary FS with its own wear leveling); you could then make F2FS (by changing the code) react to TRIM from the FS on top, so it knows what's empty and what isn't.

                        On the other hand, that also means you can install it on an SSD (from the OS, without changing the firmware) and don't need TRIM, since wear leveling is then done by F2FS (plus the garbage collector of the SSD firmware). In theory this should be good enough for the SSD, but in practice I can't tell. In the worst (and unlikely) case, the garbage collector may interfere badly with F2FS's wear leveling.
                        Last edited by V10lator; 03-28-2013, 02:16 PM.

                        Comment
