Bcachefs Merges Support For Btrfs-Like Snapshots


  • flower
    replied
    Originally posted by F.Ultra View Post

Painfully slow though if you have somewhat larger drives, since it will kick off a full reconstruct on every unclean shutdown.
    No it doesn't. Just use bitmap mode and place the CRCs on an external SSD drive.
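A minimal Python sketch of why the write-intent bitmap avoids that full reconstruct. The region size and data structures here are purely illustrative, not mdadm's real on-disk format: the idea is that md sets a bitmap bit before writing a region and clears it afterwards, so after a crash only regions with set bits need a parity resync.

```python
# Illustrative sketch of a write-intent bitmap (not mdadm's actual format).
REGION = 64  # blocks covered per bitmap bit (mdadm calls this the bitmap chunk)

class WriteIntentBitmap:
    def __init__(self, nblocks):
        self.nblocks = nblocks
        self.dirty = set()          # persisted set of in-flight regions

    def write(self, block, crash_before_clear=False):
        region = block // REGION
        self.dirty.add(region)      # 1. set bit and flush the bitmap
        # 2. ... write data + parity to the array members ...
        if crash_before_clear:
            return                  # crash here: the bit stays set on disk
        self.dirty.discard(region)  # 3. clear the bit lazily after the write

    def resync_blocks_after_crash(self):
        """Without a bitmap every block is suspect; with it, only dirty regions."""
        return len(self.dirty) * REGION

arr = WriteIntentBitmap(nblocks=1_000_000)
arr.write(5)                               # clean write: bit set, then cleared
arr.write(12345, crash_before_clear=True)  # simulated crash mid-write
assert arr.resync_blocks_after_crash() == 64  # vs. all 1,000,000 blocks without a bitmap
```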

  • F.Ultra
    replied
    Originally posted by flower View Post

I use integritysetup in a RAID6 mdadm setup at the moment. This can also detect and repair such failures.
    Painfully slow though if you have somewhat larger drives, since it will kick off a full reconstruct on every unclean shutdown.

  • F.Ultra
    replied
    Originally posted by mdedetrich View Post
    Again, completely different situation and Oracle was just finishing what Sun wanted to do. Remember the case with .net implementing Java?
Completely different situation: Microsoft added proprietary extensions to its implementation of Java, creating incompatible binaries while still marketing it as Java. Google did no such thing with Dalvik.

  • alcalde
    replied
    Originally posted by darkbasic View Post

    btrfs: raid5 is a joke
    This is an urban legend. Btrfs RAID 5 is just fine. Proof:

    https://www.unixsheikh.com/articles/...-mdadm-dm.html

  • Eumaios
    replied
What this file system really needs (and the developer could provide it in seconds) is strategic capitalization: BCacheFS. I can't be the only one who instinctively scans the name as BCAChefs.

  • kreijack
    replied
    Originally posted by flower View Post
RAIDZ works well without an SSD or any kind of tiered storage. It doesn't write RAIDZ data twice.

    EDIT: it's not even possible to do that. You can only cache sync writes; normal writes never use any kind of caching.
    Isn't that the purpose of the ZIL/SLOG?

    The point is that a safe parity update requires one of two conditions:
    1) the whole stripe is empty (so it can be written in full), or
    2) the RMW cycle is protected by a log.

    ZFS, thanks to its variable stripe size, does 1); however, with small changes there is a fragmentation problem. In any case, changing data in the middle of a stripe requires rewriting the full stripe.
    So ZFS is the best incarnation of RAID5/6/7; however, the RMW cycle requires some technical compromises that carry a performance penalty, which is mitigated by the ZIL/SLOG/L2ARC.
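The parity arithmetic behind this RMW discussion can be sketched in a few lines of Python. Block sizes and helper names here are illustrative, not any real filesystem's implementation; the point is that a small write updates parity as P' = P ^ D_old ^ D_new, and a crash between the data write and the parity write is exactly the RAID5 "write hole" a log protects against.

```python
# Sketch of RAID5 parity math: full-stripe write vs. read-modify-write update.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    """Full-stripe parity: XOR of all data blocks (condition 1: write the whole stripe)."""
    p = bytes(len(blocks[0]))
    for blk in blocks:
        p = xor_blocks(p, blk)
    return p

def rmw_update(old_parity, old_block, new_block):
    """Small write: P' = P ^ D_old ^ D_new (condition 2: needs a log, because a
    crash between writing the data block and this new parity leaves the stripe
    inconsistent -- the RAID5 write hole)."""
    return xor_blocks(xor_blocks(old_parity, old_block), new_block)

stripe = [b"\x01" * 4, b"\x02" * 4, b"\x03" * 4]
p = parity(stripe)

new_d1 = b"\x07" * 4
p2 = rmw_update(p, stripe[1], new_d1)  # update parity without reading the other blocks
stripe[1] = new_d1
assert p2 == parity(stripe)  # RMW result matches recomputing the full stripe
```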


  • billyswong
    replied
    Originally posted by S.Pam View Post

    Try disabling write cache. 'hdparm -W0'.
Thanks for this idea! I will give it a try. And many people who care about data integrity more than speed may want it set as the default too.

  • S.Pam
    replied
    Originally posted by billyswong View Post

The buggy storage stack problem described in birdie's link has already happened many times on my current desktop. Very similar. It is probably triggered by Firefox's write commands not having finished when the computer is shut down. The risk is lower if I remember to wait some time between closing Firefox and shutting down the computer. Every time, fsck rescued the ext4 drive.

    The computer is running an ASUS B450 motherboard + a Crucial SATA SSD. Not sure which one is the culprit; maybe both.
    Try disabling write cache. 'hdparm -W0'.

  • flower
    replied
    Originally posted by kreijack View Post

ZFS, btrfs and bcachefs are the only filesystems that can detect this kind of problem.

    I use integritysetup in a RAID6 mdadm setup at the moment. This can also detect and repair such failures.

  • kreijack
    replied
    Originally posted by billyswong View Post
I think btrfs fanboys should recognize this: no matter how many more features and better functions they claim, btrfs causes more corrupted, unrecoverable systems than good old ext4. It may be the fault of faulty hardware. It may be that btrfs developers assumed some standard behaviour when a computer faces power failure, while that "faulty" hardware does whatever it likes. But the truth is that ext4 survives at a far, far higher rate than btrfs. If btrfs is designed to be safe only under enterprise-grade hardware, label it so. Else, adapt to the quirks and bugs of consumer hardware in general. Or, if its design can't be fixed without breaking backward compatibility, accept its failure.
My experience is quite different from what you wrote. I used BTRFS on very bad hardware: the power supply sometimes could not sustain the hard disk, so the HD stalled.
    I never lost a filesystem. Sometimes a file was corrupted, but it was quite easy to detect thanks to the checksum.

    I can't say whether ext4 would have had a better outcome; however, ext4 is not capable of detecting corruption.

    I remember that in the beginning ZFS was considered unreliable because it was able to find corruption on non-enterprise-grade HDs. The reality is that disks became bigger, cheaper and less reliable, so the likelihood of corruption increased to the point that it is no longer such a remote possibility. The likelihood is higher still on a non-enterprise-grade HD.

    ZFS, btrfs and bcachefs are the only filesystems that can detect this kind of problem.
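The detection mechanism being described can be sketched in a few lines of Python. This is a hedged illustration, not any filesystem's real code: zlib.crc32 stands in for the actual checksum algorithms (fletcher4, crc32c, xxhash), and the block layout is invented. The point is that a checksum stored alongside each block lets a read notice silent corruption, which a plain ext4 read would return as good data.

```python
# Illustrative sketch of per-block checksumming as done by ZFS/btrfs/bcachefs.
import zlib

def write_block(data: bytes):
    """Store the block together with its checksum, as a checksumming FS would."""
    return data, zlib.crc32(data)

def read_block(data: bytes, stored_csum: int) -> bytes:
    """Verify on every read; a mismatch means the media returned bad data."""
    if zlib.crc32(data) != stored_csum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

data, csum = write_block(b"important file contents")
assert read_block(data, csum) == b"important file contents"  # clean read passes

corrupted = b"imp0rtant file contents"  # a single flipped byte on disk
try:
    read_block(corrupted, csum)
except IOError:
    pass  # detected; a filesystem without checksums would silently return the bad bytes
```

With redundancy (RAID1/RAIDZ or a second copy), the filesystem can go one step further and repair the block from the good copy after detecting the mismatch.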

