Linux 4.14 File-System Benchmarks: Btrfs, EXT4, F2FS, XFS


  • #11
    Yeah, Btrfs is slow in almost all test scenarios but blazing fast in real life! Is your tinfoil hat on?

  • #12
    Michael, can you add bcachefs to the list?

  • #13
    Originally posted by pal666 View Post
    Actually it was fastest in two tests and the same in a third; it is slow in non-real-life tests. It does CoW, so you disable CoW for databases or VMs.
    Btrfs is slower on writes and often faster on reads, which is consistent with its design (CoW) and its major feature (checksumming). It is of course also slower in real-life tests; it's just that few desktop users perform as much I/O as Michael does in these benchmarks, so the differences will feel much smaller there. If you move around PB of data on a regular basis (as I do), the differences are quite real, but for me the benefits of checksumming outweigh the performance loss, so Btrfs it is.
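    A minimal sketch of how disabling CoW typically looks; the device, mount point, and directory below are placeholders:

      # Filesystem-wide: mount with CoW disabled for data (placeholder device/mount point)
      mount -o nodatacow /dev/sdb1 /var/lib/mysql

      # Per-directory: files created here afterwards get the No_COW attribute;
      # it must be set while the directory (or file) is still empty
      chattr +C /var/lib/libvirt/images
      lsattr -d /var/lib/libvirt/images    # a 'C' flag indicates No_COW is set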

  • #14
    Why is ZFS still being avoided?
    It would be very cool to see how ZFS on Linux does, especially with RAID setups like RAID 1 or RAID 10: a comparison of Btrfs RAID, ZFS RAID, and mdadm RAID + ext4/XFS.
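    For reference, a minimal sketch of the three two-disk mirror setups; the pool/array names and device paths are placeholders:

      # ZFS: two-disk mirror (RAID 1 equivalent); 'tank' is a placeholder pool name
      zpool create tank mirror /dev/sdb /dev/sdc

      # Btrfs: RAID 1 for both data and metadata
      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

      # mdadm: RAID 1 with ext4 (or mkfs.xfs) on top
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
      mkfs.ext4 /dev/md0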

  • #15
    Originally posted by F.Ultra View Post
    If you move around PB of data on a regular basis (as I do), the differences are quite real, but for me the benefits of checksumming outweigh the performance loss, so Btrfs it is.
    Indeed. Btrfs is noticeably slower than ext4 in real life, but it has features that make it faster in practice than any other filesystem in the kernel.
    Like snapshots/clones. If you use Btrfs as the backing store for dirvish, it can automagically clone and make new backups, and just remove old snapshots on the fly.
    Removing a snapshot will still take a few hours, but it happens in the background, whereas an rm -fr of a tree on ext4 can take more than 24 hours just decrementing link counts.
    The lack of stability under high-I/O-pressure workloads prevented me from using it for anything but backups of backups, as data loss due to Btrfs self-corruption always felt imminent. Of course, that was back in 2014; it might have changed in the meantime.
    And I always remember the time a btrfsck took 7 months, with 16 GB of RAM and 540 GB of swap space, only to end in an out-of-memory error. The metadata it was working on was 250 GB; the total disk space in use was 2 TB of the 8 TB array (or was it 16 TB?).
    I think the lack of stability is also the reason it was dropped from virtualization solutions.
    I hope someday it will be stable again. The fact that it can scrub itself and verify its own data against the metadata checksums is pretty important in CDN-style setups, where you have redundancy across multiple nodes but keep costs down by having only one disk per node.
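    A minimal sketch of the snapshot and scrub workflow described above; the subvolume paths and snapshot names are placeholders:

      # Create a read-only snapshot of the current backup tree
      btrfs subvolume snapshot -r /backups/current /backups/snap-2017-11-27

      # Delete an old snapshot; the actual space reclamation runs in the background
      btrfs subvolume delete /backups/snap-2017-10-01

      # Scrub: re-read all data and verify it against the stored checksums
      btrfs scrub start /backups
      btrfs scrub status /backups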

  • #16
    Originally posted by gadnet View Post
    Why is ZFS still being avoided? It would be very cool to see how ZFS on Linux does, especially with RAID setups like RAID 1 or RAID 10: a comparison of Btrfs RAID, ZFS RAID, and mdadm RAID + ext4/XFS.
    My best bet is that you have to compile ZFS yourself, and Michael favors easy-to-automate tests.
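    That said, several distributions ship ZFS on Linux as a DKMS package, so a full manual build isn't always needed; a sketch for a Debian/Ubuntu-style system (package names vary by distro):

      # The kernel module is built automatically by DKMS at install time
      sudo apt install zfs-dkms zfsutils-linux
      sudo modprobe zfs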

  • #17
    Originally posted by JustRob View Post
    Michael, can you add bcachefs to the list?
    Also, for those fans of ZFS there is Lustre: http://lustre.org/download/

    NextPlatform wrote an article about Lustre on ZFS claiming huge performance increases: https://www.nextplatform.com/2017/01...ntinuing-work/

    Sometimes the increase is a measly 2x or 10x, but there's an 80x faster RAIDZ3 reconstruction.

    This open-source software has Intel backing, with binaries available on their site for several distributions: https://wiki.hpdd.intel.com/display/PUB/Lustre+Releases

    For those who enjoy studying, there are complete instructions for rolling your own: http://wiki.lustre.org/Compiling_Lustre

    Incorporating both bcachefs (which Michael is familiar with) and Lustre would provide two filesystems with the newest implementation of ZFS and its features to add to the testing mix.

  • #18
    Per the XFS wiki, consider changing the default CFQ I/O scheduler (for example to deadline, noop, or BFQ) to enjoy the full benefits of XFS, especially on SSDs.
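    A minimal sketch of checking and switching the scheduler at runtime; sda is a placeholder device, and the available scheduler names depend on the kernel (blk-mq kernels list e.g. mq-deadline instead):

      # Show available schedulers; the active one appears in brackets
      cat /sys/block/sda/queue/scheduler

      # Switch this device to deadline (not persistent across reboots)
      echo deadline | sudo tee /sys/block/sda/queue/scheduler

      # To persist, use a kernel parameter such as elevator=deadline, or a udev rule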
