
Another Look At The Bcachefs Performance on Linux 6.7


  • #31
    Originally posted by mrg666 View Post
    There is this article that I found enlightening about those "advanced" file systems. Ext4 (and previous revisions) never failed me in more than 25 years of Linux use. I would probably try ZFS, XFS, btrfs, bcachefs, etc if I were running a server with very heavy I/O load on the disk for reliability. But ext4 is just fine for a personal development workstation.
    Well, it failed me 3 times in the last 10-15 years. btrfs, so far, has not yet failed me in roughly the last 10 years. So now what?
    And, don't ever think one of those advanced file systems can be better than regular backups.
    Of course, nothing can actually replace backups other than more backups.
    But, as an ext4 user, how do you even know that you have faulty data and that you need your backup?
    tbh, (data) checksumming support should be a standard thing in filesystems today.

    (Edit: and yes, that's what happened to me. Faulty data I noticed by pure coincidence.)
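
    To make the checksumming point concrete: without filesystem-level data checksums, spotting silent corruption means keeping your own checksum manifest and re-verifying it from time to time, roughly like the hypothetical sketch below (the manifest name and the argument handling are made up for illustration). btrfs, ZFS and bcachefs do the equivalent per block, transparently, on every read.

    ```python
    # Minimal sketch of a hand-rolled checksum manifest, i.e. what a filesystem
    # without data checksumming (ext4) leaves you to do yourself.
    # The manifest name and CLI handling are made up for illustration.
    import hashlib
    import json
    import os
    import sys

    MANIFEST = "checksums.json"

    def sha256(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def build(root):
        # Record a checksum for every regular file under root.
        manifest = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                manifest[p] = sha256(p)
        with open(MANIFEST, "w") as f:
            json.dump(manifest, f, indent=2)

    def verify():
        # Re-hash everything and report files whose contents changed behind your back.
        with open(MANIFEST) as f:
            manifest = json.load(f)
        for p, digest in manifest.items():
            if not os.path.exists(p):
                print(f"MISSING  {p}")
            elif sha256(p) != digest:
                print(f"CORRUPT? {p}")

    if __name__ == "__main__":
        build(sys.argv[1]) if len(sys.argv) > 1 else verify()
    ```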



    • #32
      Originally posted by mrg666 View Post
      There is this article that I found enlightening about those "advanced" file systems. Ext4 (and previous revisions) never failed me in more than 25 years of Linux use. I would probably try ZFS, XFS, btrfs, bcachefs, etc if I were running a server with very heavy I/O load on the disk for reliability. But ext4 is just fine for a personal development workstation. And, don't ever think one of those advanced file systems can be better than regular backups.
      I use ZFS to cover random disk failures. All the other extra features are just icing, sprinkles, pecans and coconut shreds on the cake. One disk going bad is likely. Two going bad at the same time isn't that likely. Three going bad at the same time is astronomically unlikely. Anyhoo, ZFS has covered my ass more than once over the past 10 years when that one bad disk happened. One disk dies, but the mirror or raidz keeps on keeping on until Newegg or Amazon ships me a new disk. It works for me, and my disk-to-mirror-to-raidz setup has traveled across 4 or 5 PCs now and is basically the RAIDZ of Theseus.
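
      Roughly the arithmetic behind "two going bad at the same time isn't that likely" (the failure rate and the replacement window below are assumed numbers; real disks from the same batch don't fail independently, which is one more reason backups still matter):

      ```python
      # Back-of-envelope numbers for mirror/raidz survival; the 5% annual
      # failure rate and the one-week replacement window are assumptions.
      afr = 0.05                   # assumed annual failure rate of one disk
      window_weeks = 1             # assumed time to receive and resilver a new disk

      p_one_this_year = afr
      p_mirror_lost = afr * (window_weeks / 52)  # surviving disk also dies during rebuild

      print(f"one disk failing this year:     {p_one_this_year:.1%}")   # 5.0%
      print(f"mirror lost during the rebuild: {p_mirror_lost:.3%}")     # ~0.096%
      ```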



      • #33
        Originally posted by Berniyh View Post
        Well, at least that has improved a lot, mainly due to systemd.
        systemd, whether you like it or not, has led to a huge standardization in many areas of the core system.
        Yeah, that's what I'm saying.



        • #34
          I am not saying to just use ext4 and that the others are unnecessary. Actually, I am itching to try the new options. I just checked the results and could not justify switching ... again.



          • #35
            As much as I appreciate another look at Bcachefs, I mistrust the Corsair MP700 drive that was used here. The drive apparently works well with sequential operations, but lags significantly behind in random operations, as previous Phoronix tests have shown. So I still wonder how Bcachefs compares to other filesystems when used on other drives.



            • #36
              Originally posted by Berniyh View Post
              btrfs, so far, has not yet failed me in roughly the last 10 years.
              Wow. I guess probability suggests there are likely to be **some** unicorns.



              • #37
                Personally I think filesystem tests should be performed on a block device in RAM. 32 GB of RAM is not uncommon these days, and with 64 or even 128 GB it should be possible to set up a test without relying on a physical HDD or SSD and all the randomness the hardware introduces. That way you would see the true differences in the filesystems' theoretical performance on "ideal" hardware. Only then does testing on real hardware become relevant, in my opinion.

                http://www.dirtcellar.net
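
                One way such a RAM-backed test might be set up, purely as a rough sketch: the brd ramdisk module, the 8 GiB size, the mount point and the fio parameters below are arbitrary examples (and it has to run as root), not what Phoronix actually does.

                ```python
                # Hypothetical sketch: benchmark a filesystem on a RAM-backed block device
                # so the drive's own quirks drop out of the comparison. Run as root.
                import subprocess

                def run(cmd):
                    print("+", " ".join(cmd))
                    subprocess.run(cmd, check=True)

                run(["modprobe", "brd", "rd_nr=1", "rd_size=8388608"])  # 8 GiB /dev/ram0
                run(["mkfs.ext4", "-q", "/dev/ram0"])                   # or mkfs.btrfs, mkfs.xfs, ...
                run(["mkdir", "-p", "/mnt/fstest"])
                run(["mount", "/dev/ram0", "/mnt/fstest"])
                try:
                    run(["fio", "--name=randrw", "--directory=/mnt/fstest", "--rw=randrw",
                         "--bs=4k", "--size=4G", "--runtime=60", "--time_based",
                         "--group_reporting"])
                finally:
                    run(["umount", "/mnt/fstest"])
                    run(["rmmod", "brd"])
                ```

                Even on a ramdisk the page cache still sits in front of the filesystem, so this mostly isolates allocator and metadata behaviour rather than giving a pure "ideal hardware" number.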



                • #38
                  Originally posted by waxhead View Post
                  Personally I think filesystem tests should be performed on a block device in RAM. 32 GB of RAM is not uncommon these days, and with 64 or even 128 GB it should be possible to set up a test without relying on a physical HDD or SSD and all the randomness the hardware introduces. That way you would see the true differences in the filesystems' theoretical performance on "ideal" hardware. Only then does testing on real hardware become relevant, in my opinion.
                  I strongly disagree with this. One of the jobs a high-performance filesystem may need to do is cope with the idiosyncrasies of the drive(s) and potential setups: SMR, RAID, eMMC, NAND, etc. I would say that in isolation, any test is useless.



                  • #39
                    Originally posted by vermaden View Post
                    Why no ZFS also included in the tests?

                    Especially knowing that the tests were made on Ubuntu where ZFS is available ...
                    There's a good chance that OpenZFS isn't built against the latest kernel used here; they seem to be a few versions behind on the regular. But I couldn't find where to confirm this.



                    • #40
                      Originally posted by cj.wijtmans View Post

                      Unfortunately I also use systemd on Gentoo. I just don't like sysvinit bash scripts; although they're more flexible, readable and configurable, I just don't have enough experience with bash scripting to make a proper init script for the life of me. I wish systemd was less of a monolithic beast and more of a basic init system. Another terrible thing is that journald assumes something to be byte data when a log line contains a lot of numbers 🤷🏼‍♂️.
                      And by a monolithic beast you mean a highly modular init and runtime management/admin system comprising many optional components, each with its own purpose.

