Another Look At The Bcachefs Performance on Linux 6.7


  • cynic
    replied
    Originally posted by andyprough View Post

    I don't believe I was showing off, not sure how that would work when I'm asking you a question. You seem to be the one trying to show off some sort of superior knowledge, but my probing question has apparently revealed that you do, in fact, have zero experience with the problem you are scare-mongering about. As expected. Since you are such an expert on 'bitrot and ext4' searches, I'm sure you realize that there are no reports of it actually occurring with ext4. There are quite a few articles questioning whether bitrot is a real phenomenon at all, or just a conspiracy.
    After reading that "bitrot is a conspiracy", I shouldn't be wasting my time here.
    Still, I'm feeling generous today, so here I am.

    Bitrot occurs for several reasons, and with today's large storage capacities it is almost inevitable.
    As I already wrote, it is not specific to ext4 or to any other particular filesystem implementation; it is a physical phenomenon that affects storage media in general.

    ext4 cannot detect bitrot because it checksums only metadata, not data, so your data degrades slowly and silently, and the degraded data probably ends up in your backups, overwriting good copies.

    ZFS and btrfs, by contrast, checksum data as well and can detect (and, if you have redundancy, repair) corruption.

    I don't know what search engine you're using, but one of the first results I get on the topic is this interesting article from 2014: https://arstechnica.com/information-...n-filesystems/
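The detection mechanism described above can be sketched in a few lines. This is a simplified illustration (using SHA-256 from Python's standard library), not the actual checksum machinery of ZFS or btrfs, which checksum per-block/per-extent and store the checksums in metadata:

```python
import hashlib

def checksum(block: bytes) -> str:
    # Content checksum over the data itself, which is what ZFS/btrfs
    # add on top of ext4-style metadata-only checksumming.
    return hashlib.sha256(block).hexdigest()

# On write: store the data together with its checksum.
data = b"important file contents"
stored_sum = checksum(data)

# Bitrot: a single bit flips at rest, with no I/O error reported.
corrupted = bytearray(data)
corrupted[0] ^= 0x01
corrupted = bytes(corrupted)

# On read: a checksumming filesystem verifies before returning data.
assert checksum(data) == stored_sum       # clean read passes
assert checksum(corrupted) != stored_sum  # the flipped bit is detected
```

A filesystem without data checksums would happily return `corrupted` as if it were valid; with a verified checksum it can report the error or, given a redundant copy, repair it.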



  • flower
    replied
    Originally posted by F.Ultra View Post

    None of those articles can be serious. Bitrot is a physical phenomenon and no mystery at all: bits on a drive are not carved in stone (and even things carved in stone experience bitrot eventually). There is nothing in other filesystems, like EXT4, that detects bitrot. Of course bitrot is extremely rare, since HDDs and SSDs don't store raw bits but instead use various forms of error-correcting codes, but if people are now claiming that there have never been unrecoverable files on storage media, then I have more than one bridge to sell them.
    It's not only bitrot. I once had a SATA cable go bad after a year; I only noticed it through ZFS checksum errors.

    Bad SATA cables are such a common cause of checksum errors that checking the cable is always the first advice someone gets when asking about checksum errors on the ZFS subreddit.



  • F.Ultra
    replied
    Originally posted by andyprough View Post

    I don't believe I was showing off, not sure how that would work when I'm asking you a question. You seem to be the one trying to show off some sort of superior knowledge, but my probing question has apparently revealed that you do, in fact, have zero experience with the problem you are scare-mongering about. As expected. Since you are such an expert on 'bitrot and ext4' searches, I'm sure you realize that there are no reports of it actually occurring with ext4. There are quite a few articles questioning whether bitrot is a real phenomenon at all, or just a conspiracy.
    None of those articles can be serious. Bitrot is a physical phenomenon and no mystery at all: bits on a drive are not carved in stone (and even things carved in stone experience bitrot eventually). There is nothing in other filesystems, like EXT4, that detects bitrot. Of course bitrot is extremely rare, since HDDs and SSDs don't store raw bits but instead use various forms of error-correcting codes, but if people are now claiming that there have never been unrecoverable files on storage media, then I have more than one bridge to sell them.

    Originally posted by phoenix_rizzen View Post
    Just curious why BcacheFS block size is set to 512 bytes when all the other filesystems are set to 4096 bytes? Wouldn't it make sense to set them all to the same block size? And, really, 4096 bytes should be the minimum block size for any filesystem to allow for easier migration to Advanced Format drives going forward (for those still using spinning rust for storage).
    Since Michael used the default setting, it looks like bcachefs reads the disk's sector size wrong (some SSDs report a 512 B sector size for "compatibility reasons") while the other filesystems don't. Bcachefs should default to 4K blocks according to the docs, so this really sounds like a bug.
    Last edited by F.Ultra; 02 December 2023, 01:49 AM.
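The "compatibility reasons" point comes down to drives advertising two sector sizes (these are what Linux exposes under /sys/block/<dev>/queue/ as logical_block_size and physical_block_size). A rough sketch of how those two values are conventionally classified, not bcachefs's actual probing logic:

```python
def sector_format(logical: int, physical: int) -> str:
    """Classify a drive by its reported logical/physical sector sizes."""
    if logical == 4096 and physical == 4096:
        return "4Kn: native 4K, no ambiguity"
    if logical == 512 and physical == 4096:
        return "512e: 4K media emulating 512B sectors for compatibility"
    if logical == 512 and physical == 512:
        return "512n: native 512B (legacy)"
    return f"unusual combination: logical={logical}, physical={physical}"

# A filesystem that trusts only the logical size on a 512e drive
# may pick a 512B block size even though the media is really 4K.
print(sector_format(512, 4096))
print(sector_format(4096, 4096))
```

If mkfs keys its default block size off the logical value on a 512e SSD, it ends up at 512B, which would match the benchmark setup described above.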



  • zexelon
    replied
    Originally posted by akarypid View Post

    What I would like to see is a comparison between XFS and ZFS/BTRFS in functional-equivalence scenarios.

    The reason to run ZFS is to exploit mirroring/RAID, ability to extend volumes, etc. Why not do an XFS+LVM+mdraid (not sure how you would set this up) versus ZFS/BTRFS native configuration? That would be very interesting for users considering options beyond ext4.

    I hope Michael likes this article idea and picks it up.
    Michael This would be awesome to see! Including bcachefs (or bca-chefs, if you will)! I run a lot of servers on an XFS/LVM combo, but I have started experimenting with ZFS on a non-essential system. A head-to-head comparison of these four setups would be really helpful to real-world sysadmins.



  • andyprough
    replied
    Originally posted by cynic View Post
    Bitrotting is not a bug of the filesystem.
    Next time, instead of showing off and calling other people conspiracy theorists, just use a search engine to at least understand what you are talking about.
    I don't believe I was showing off, not sure how that would work when I'm asking you a question. You seem to be the one trying to show off some sort of superior knowledge, but my probing question has apparently revealed that you do, in fact, have zero experience with the problem you are scare-mongering about. As expected. Since you are such an expert on 'bitrot and ext4' searches, I'm sure you realize that there are no reports of it actually occurring with ext4. There are quite a few articles questioning whether bitrot is a real phenomenon at all, or just a conspiracy.



  • phoenix_rizzen
    replied
    Just curious why BcacheFS block size is set to 512 bytes when all the other filesystems are set to 4096 bytes? Wouldn't it make sense to set them all to the same block size? And, really, 4096 bytes should be the minimum block size for any filesystem to allow for easier migration to Advanced Format drives going forward (for those still using spinning rust for storage).



  • akarypid
    replied
    Originally posted by zexelon View Post

    I have used XFS for years on workstations, servers, and clusters. I moved over to it from EXT4 specifically for some of its edge-case performance. I would say it is definitely on par with EXT4 for reliability and edges it out in performance in quite a few use cases.

    XFS is nowhere near as advanced as ZFS, btrfs, or bcachefs, but it can be built on top of LVM quite nicely to achieve similar (though clunkier than, say, ZFS) setups.
    What I would like to see is a comparison between XFS and ZFS/BTRFS in functional-equivalence scenarios.

    The reason to run ZFS is to exploit mirroring/RAID, ability to extend volumes, etc. Why not do an XFS+LVM+mdraid (not sure how you would set this up) versus ZFS/BTRFS native configuration? That would be very interesting for users considering options beyond ext4.

    I hope Michael likes this article idea and picks it up.



  • Hans Bull
    replied
    Would have been interesting to see whether there's a performance difference when using a 4096-byte block size like the other tested filesystems.



  • waxhead
    replied
    Originally posted by Quackdoc View Post

    I don't think a test on RAM is "useless", but I don't think there is too much value in it either, aside from "huh, that's neat". Not to mention there is little point in actually optimizing for a RAM-only situation (there is some, don't get me wrong) for the vast majority of filesystems, so it would not be indicative of the "best performance" anyway.

    I do think tests in isolation are useless because they will never be indicative of what a filesystem will actually be put through; there are thousands upon thousands of potential variables.
    Thanks for clarifying. I do agree with you: testing in RAM is not a substitute for testing on real hardware, as it is all an orchestrated "dance" between all the software (and hardware) layers, and that is what matters in the end. I do still think that testing in RAM will identify (obvious) algorithmic problems, but then again the real answer to what is best would probably require testing each and every configuration in between, and that is a bit too complex, though.



  • ptrwis
    replied
    Only XFS for PostgreSQL

