Benchmarking The Experimental Bcachefs File-System Against Btrfs, EXT4, F2FS, XFS & ZFS


  • #11
    Originally posted by Raka555 View Post
    Nice work.
    I assume these tests are started on a freshly formatted partition?
    It would be interesting to see how they compare if you ran the tests on a filesystem that is, say, 80% full.
    I'd also like to see basic disk array tests: say, 4- or 6-disk stripe, mirror, and stripe+mirror setups, at a minimum between ZFS, Bcachefs, and Btrfs. That said, I understand these take a good bit of setup/execution time.

    BTW, loved the "xfs ran too fast to measure" on the 900p, LOL.



    • #12
      [Edit: seems to be a common thought these days.]

      Can we stop benchmarking these filesystems using a single SSD?

      At the very least, it should be tested using a mirror (RAID1). At best, it should be tested with a series of different pool layouts (single mirror, dual mirror, triple mirror, single-parity, double-parity, triple-parity; with those last three also striped across multiple vdevs for RAID50/RAID60/RAID70-style setups).

      Nobody really cares how these next-generation storage systems work on single device pools. Running them on a single device pool eliminates 90% of their useful features.

      And stop with the "default mount options" cop-out. Test different configurations, with different options, to see how the options affect different workloads. Compression types, block sizes, etc.
      Last edited by phoenix_rizzen; 25 June 2019, 04:15 PM.



      • #13
        I hope GRUB 2.03 will be released soon, and that we see an option to install our favorite distribution on an F2FS root fs. EXT4 is still kinda useful for HDDs, but F2FS is just better with access times, basically the most important thing for a root fs on a desktop system.



        • #14
          Originally posted by phoenix_rizzen View Post
          [Edit: seems to be a common thought these days.]

          Can we stop benchmarking these filesystems using a single SSD?

          At the very least, it should be tested using a mirror (RAID1). At best, it should be tested with a series of different pool layouts (single mirror, dual mirror, triple mirror, single-parity, double-parity, triple-parity; with those last three also striped across multiple vdevs for RAID50/RAID60/RAID70-style setups).

          Nobody really cares how these next-generation storage systems work on single device pools. Running them on a single device pool eliminates 90% of their useful features.

          And stop with the "default mount options" cop-out. Test different configurations, with different options, to see how the options affect different workloads. Compression types, block sizes, etc.
          Michael does good work, and a lot of it. Testing multiple configurations takes time, adds variables, and still wouldn't satisfy everyone's "what about...?" questions. He is generous enough to put his benchmarking software out there for everyone, like a true libre gentleman, so if people care enough they can do the benchmarks themselves.

          I've always used XFS with my beloved Fedora installs, for the simple reason that it is what I used in the classroom when I was learning on CentOS. I went to Ubuntu for a short minute, but it did not work as well there (something about XFS on the /boot partition). I've tried Btrfs, and it seemed fine enough, but I was worried about stability. I started trying ZoL, and it was a touch complicated to set up at the time; I needed an OS right then, so I settled for EXT4 until I got my Fedora back up and running.



          • #15
            Amazing for such a new filesystem.

            Also, I just want to quote this one line:

            Originally posted by phoronix
            The XFS result was too quick to accurately record from this Optane 900p drive.
            Legendary performance from a legendary file system. The C language of filesystems.



            • #16
              The initial two application launch tests are showing ZFS's ARC at work. Benchmarks that are designed to be intentionally random will be a lot slower on ZFS. For normal usage it *needs* its ARC to function right. If you defeat the ARC, then ZFS is going to suck with all the passes it makes over the data. Data integrity comes at a price, so everyone get in the ARC, it's raining!
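              To illustrate the point about random benchmarks: ZFS's ARC is an adaptive replacement cache, more sophisticated than plain LRU, but a toy LRU simulation (all sizes here are made up for illustration) shows why a deliberately uniform-random access pattern defeats any cache, while a workload with a hot set does not:

              ```python
              import random
              from collections import OrderedDict

              def hit_rate(accesses, cache_size):
                  """Replay a block-access trace through a simple LRU cache."""
                  cache = OrderedDict()
                  hits = 0
                  for block in accesses:
                      if block in cache:
                          hits += 1
                          cache.move_to_end(block)       # refresh recency
                      else:
                          cache[block] = True
                          if len(cache) > cache_size:
                              cache.popitem(last=False)  # evict least recently used
                  return hits / len(accesses)

              random.seed(0)
              blocks = 100_000      # working set far larger than the cache
              cache_size = 1_000

              # Skewed workload: 90% of accesses fall on a small hot set (cache-friendly).
              skewed = [random.randrange(500) if random.random() < 0.9
                        else random.randrange(blocks) for _ in range(50_000)]

              # Uniform-random workload: the pattern that defeats caching entirely.
              uniform = [random.randrange(blocks) for _ in range(50_000)]

              print(f"skewed hit rate:  {hit_rate(skewed, cache_size):.2f}")
              print(f"uniform hit rate: {hit_rate(uniform, cache_size):.2f}")
              ```

              With a hot set that fits in the cache, the hit rate stays near 90%; under uniform-random access it collapses to roughly cache_size/blocks, and every miss pays the full read path.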

              Bcachefs looks pretty good, maybe a candidate for a desktop system. I'm still concerned about its stability, but it has none of the warning flags Btrfs had, so that is a good sign. Keep up the good work!
              Last edited by k1e0x; 26 June 2019, 04:04 PM.



              • #17
                Originally posted by phoenix_rizzen View Post
                Nobody really cares how these next-generation storage systems work on single device pools. Running them on a single device pool eliminates 90% of their useful features.
                I do. I want to be able to format USB sticks with ZFS. (Why not? Unreliable media plus a filesystem that can be read by Linux, BSD, Solaris, Mac OS X, and now Windows? Sounds like a good fit to format it with a cross-platform filesystem that will actually protect my data and tell me when it's corrupt.)

                I already do this with single removable backup drives. (cold storage)
                Last edited by k1e0x; 25 June 2019, 07:32 PM.



                • #18
                  XFS held up well in these tests. SGI clearly had some really smart people working for them. From these benchmarks, it seems like XFS might be a good choice for server workloads with lots of random activity.

                  What I don't understand is why ZFS still has such a dedicated following. I'm not claiming they are wrong, I'm just confused. If anyone should be backing ZFS, I think it should be Oracle, and yet Oracle Linux seems to prefer Btrfs. Oracle Linux does not officially offer ZFS support at all. In terms of Linux file systems, Oracle seems to be putting its resources into both Btrfs and OCFS2, while also being willing to supply both under the GPL, making the license compatible with the Linux kernel. If we are going to continue to look at ZFS as an option, can someone please get Oracle on board with it?



                  • #19
                    The reason bcachefs was slow on random reads is because of the way it does data checksumming: it stores checksums at the granularity of extents, which reduces metadata size and is a good trade-off for most applications. At some point I do need to add an option to force storing checksums at a smaller granularity. For now, it would be good to see results testing bcachefs with data checksums off for that test; it should be comparable to ext4/xfs then.

                    I suspect that's why the pgbench numbers are low too.

                    dbench is probably slower because of lock contention on the inodes btree. I have code that adds a write cache for inode updates that drastically improves that, but it's off by default because of interactions with the journal that still need tuning.

                    Always more to do...
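                    A toy model of the trade-off Kent describes above (the 128 KiB extent size and 8-byte checksum width here are illustrative assumptions, not bcachefs's actual on-disk values): verifying a small random read means reading and checksumming the whole checksummed unit that covers it, while finer granularity multiplies the checksum metadata.

                    ```python
                    def random_read_cost(read_size, csum_granularity):
                        """Bytes that must be read and hashed to verify one random read."""
                        # The entire checksummed unit covering the read must be fetched.
                        return max(read_size, csum_granularity)

                    def csum_metadata(file_size, csum_granularity, csum_bytes=8):
                        """Total bytes of checksum metadata stored for a file."""
                        return (file_size // csum_granularity) * csum_bytes

                    KiB, MiB = 1024, 1024 * 1024
                    file_size = 1 * MiB   # one contiguous 1 MiB file, hypothetically

                    extent = 128 * KiB    # checksum per extent (assumed size)
                    block = 4 * KiB       # checksum per block

                    print(random_read_cost(4 * KiB, extent))  # 131072: verify the whole extent
                    print(random_read_cost(4 * KiB, block))   # 4096: verify just one block
                    print(csum_metadata(file_size, extent))   # 64 bytes of checksums
                    print(csum_metadata(file_size, block))    # 2048 bytes of checksums
                    ```

                    Extent-granularity checksums keep metadata 32x smaller in this sketch, at the cost of amplifying small random reads, which matches why turning data checksums off should bring the random-read numbers in line with ext4/xfs.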



                    • #20
                      I'm not sure that Oracle's top brass is still aware of Solaris' existence in their portfolio.

