Another Look At The Bcachefs Performance on Linux 6.7

  • Quackdoc
    replied
    Originally posted by waxhead View Post

    I understand what you mean, but I still think it would be valuable to see how each filesystem copes when running on "ideal hardware", without any latency introduced by the hardware itself (SSD/HDD). I agree that your workload is the only thing that matters, but why exactly do you think it is useless? If filesystem X takes 10 seconds doing operation A and filesystem Y only takes 5 seconds, that tells you something about the efficiency of the algorithms being used. Yes, the results may be completely reversed on real hardware depending on access patterns, but since we are moving more and more towards storage devices that are mostly "regular RAM", I still think it would be a good test. I would love to hear more about why you think such a test is useless!
    I don't think a test on RAM is "useless", but I don't think there is too much value in it either, aside from "huh, that's neat". Not to mention there is little point in actually optimizing for a RAM-only situation (there is some, don't get me wrong) for the vast majority of filesystems, so the results would not be indicative of the "best performance" anyway.

    I do think tests in isolation are useless, because they will never be indicative of what a filesystem will actually be put through; there are thousands upon thousands of potential variables.

  • waxhead
    replied
    Originally posted by Quackdoc View Post

    I strongly disagree with this; one of the jobs a high-performance filesystem may need to do is cope with the idiosyncrasies of the drive(s) and potential setups: SMR, RAID, eMMC, NAND, etc. I would say that in isolation, any test is useless.
    I understand what you mean, but I still think it would be valuable to see how each filesystem copes when running on "ideal hardware", without any latency introduced by the hardware itself (SSD/HDD). I agree that your workload is the only thing that matters, but why exactly do you think it is useless? If filesystem X takes 10 seconds doing operation A and filesystem Y only takes 5 seconds, that tells you something about the efficiency of the algorithms being used. Yes, the results may be completely reversed on real hardware depending on access patterns, but since we are moving more and more towards storage devices that are mostly "regular RAM", I still think it would be a good test. I would love to hear more about why you think such a test is useless!

  • hyperchaotic
    replied
    Originally posted by cj.wijtmans View Post

    Unfortunately I also use systemd on Gentoo. I just don't like sysvinit bash scripts; although it is more flexible, readable and configurable, I just don't have enough experience with bash scripting to write a proper init script for the life of me. I wish systemd were less of a monolithic beast and more of a basic init system. Another terrible thing is that journald assumes a log line is byte data when it contains a lot of numbers 🤷🏼‍♂️.
    And by "a monolithic beast" you mean a highly modular init and runtime management/admin system comprising many optional components, each with its own purpose.

  • rommyappus
    replied
    Originally posted by vermaden View Post
    Why was ZFS not also included in the tests?

    Especially knowing that the tests were run on Ubuntu, where ZFS is available ...
    There's a good chance that OpenZFS isn't built against the latest kernel used here; they tend to lag a few kernel versions behind. But I couldn't find where to confirm this.
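
    One rough way to check this locally (my sketch, not from this thread): if I recall correctly, the OpenZFS source tree's META file advertises the supported kernel range, and modinfo only finds a zfs module if one has actually been built for the running kernel. The script below is illustrative; treat the exact behaviour as an assumption.

    Code:
    #!/usr/bin/env python3
    """Rough check: is a ZFS kernel module available for the running kernel?"""
    import platform
    import subprocess

    running = platform.release()  # kernel release string, same as `uname -r`
    print(f"Running kernel: {running}")

    # modinfo searches /lib/modules/<running kernel>/, so success means a
    # zfs.ko exists for this exact kernel; failure suggests OpenZFS was not
    # built against it (e.g. when the kernel is newer than OpenZFS supports).
    try:
        out = subprocess.run(
            ["modinfo", "-F", "version", "zfs"],
            capture_output=True, text=True, check=True,
        )
        print(f"OpenZFS module version for this kernel: {out.stdout.strip()}")
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("No zfs module found for the running kernel.")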

  • Quackdoc
    replied
    Originally posted by waxhead View Post
    Personally I think filesystem tests should be performed on a block device in RAM. 32 GB of RAM is not uncommon these days, and with 64 or even 128 GB it should be possible to set up a test without relying on a physical HDD or SSD, with all the randomness the hardware introduces. That way you would see the true differences in the filesystems' theoretical performance on "ideal" hardware. Only then does testing on real hardware become relevant, in my opinion.
    I strongly disagree with this; one of the jobs a high-performance filesystem may need to do is cope with the idiosyncrasies of the drive(s) and potential setups: SMR, RAID, eMMC, NAND, etc. I would say that in isolation, any test is useless.

  • waxhead
    replied
    Personally I think filesystem tests should be performed on a block device in RAM. 32 GB of RAM is not uncommon these days, and with 64 or even 128 GB it should be possible to set up a test without relying on a physical HDD or SSD, with all the randomness the hardware introduces. That way you would see the true differences in the filesystems' theoretical performance on "ideal" hardware. Only then does testing on real hardware become relevant, in my opinion.
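
    For illustration, a minimal sketch of what such a RAM-backed test could look like (my addition, not part of the original post): it uses the brd ramdisk module to create /dev/ram0 entirely in RAM, assumes roughly 8 GB of spare memory and root privileges, and the filesystem, ramdisk size, mountpoint and workload are placeholders.

    Code:
    #!/usr/bin/env python3
    """Sketch: benchmark a filesystem on a RAM-backed block device."""
    import os
    import subprocess
    import time

    FS = "ext4"                    # swap in btrfs, xfs, bcachefs, ...
    RD_SIZE_KIB = 8 * 1024 * 1024  # 8 GiB ramdisk (brd's rd_size is in KiB)
    MNT = "/mnt/ramtest"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Create /dev/ram0 backed purely by RAM, so drive latency drops out.
    run("modprobe", "brd", "rd_nr=1", f"rd_size={RD_SIZE_KIB}")
    run(f"mkfs.{FS}", "/dev/ram0")
    os.makedirs(MNT, exist_ok=True)
    run("mount", "/dev/ram0", MNT)

    try:
        # Toy workload: write 4 GiB in 1 MiB chunks, fsync once at the end.
        start = time.monotonic()
        with open(os.path.join(MNT, "testfile"), "wb") as f:
            chunk = os.urandom(1024 * 1024)
            for _ in range(4096):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        print(f"{FS}: wrote 4 GiB in {time.monotonic() - start:.2f}s")
    finally:
        run("umount", MNT)
        run("rmmod", "brd")        # frees the ramdisk memory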

  • clipcarl
    replied
    Originally posted by Berniyh View Post
    btrfs, so far, has not yet failed me in roughly the last 10 years.
    Wow. I guess probability suggests there are likely to be **some** unicorns.

  • sdack
    replied
    As much as I appreciate another look at Bcachefs, I mistrust the Corsair MP700 drive that was used here. The drive apparently performs well in sequential operations but lags significantly behind in random operations, as previous Phoronix tests have shown. So I still wonder how Bcachefs compares to other filesystems when used on other drives.

  • mrg666
    replied
    I am not saying just use ext4 and that the others are unnecessary. Actually, I am itching to try the new options. I just checked the results and could not justify switching ... again.

  • LtdJorge
    replied
    Originally posted by Berniyh View Post
    Well, at least that has improved a lot, mainly due to systemd.
    systemd, whether you like it or not, has led to a huge standardization in many areas of the core system.
    Yeah, that's what I'm saying.
