Bcachefs Lands More Fixes Ahead Of Linux 6.7 Stable Debut

  • Developer12
    replied
    Originally posted by varikonniemi View Post

    Indeed, fix them before they eat someone's data. Unlike ZFS that was eating data for decades before someone noticed
    The bug from last month was very difficult to trigger and required very specific conditions (it only caused programs to read zeroes when reading back recent in-flight writes), and was only very recently found when aggravated by other changes in both ZFS and coreutils. It's a strong testament to ZFS's reliability that we're down to such hard-to-trigger bugs, with this possibly being the first ever to risk actual data corruption.
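    For context (a sketch of mine, not coreutils' actual C implementation): sparse-aware copy tools probe for data with lseek(SEEK_DATA)/lseek(SEEK_HOLE) and skip whatever the filesystem reports as holes, so if the filesystem momentarily reports a hole over data that is still dirty in memory, the copy silently receives zeroes:

```python
import os

def sparse_copy(src_path, dst_path):
    """Copy a file while skipping holes, the way sparse-aware tools do.

    If the filesystem wrongly reports a hole over data that is still
    dirty in memory, this loop never reads those bytes, and the copy
    ends up with zeroes there instead of the real data.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        end = os.fstat(src.fileno()).st_size
        offset = 0
        while offset < end:
            try:
                # next region the filesystem claims holds data
                data_start = os.lseek(src.fileno(), offset, os.SEEK_DATA)
            except OSError:
                break  # nothing but a hole from here to EOF
            # end of that data region (start of the next hole)
            hole_start = os.lseek(src.fileno(), data_start, os.SEEK_HOLE)
            src.seek(data_start)
            dst.seek(data_start)
            dst.write(src.read(hole_start - data_start))
            offset = hole_start
        dst.truncate(end)  # preserve original size (trailing hole)
```

    The copy is only as good as the filesystem's answers to those SEEK_DATA/SEEK_HOLE probes, which is why the bug surfaced through coreutils rather than through ZFS itself.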

    It's now been patched, and a whole class of test cases has been added to the ztest suite to catch issues like it in the future. When was the last time someone subjected BTRFS to a comprehensive battery of torture tests, much less hooked it up to CI?

    That might well have been the first time in its entire history that a bug in ZFS could lead to data corruption. Meanwhile, it seems every few months we see another round of critical BTRFS fixes grace the pages of Phoronix; it was only last cycle that another rash of them landed: https://www.phoronix.com/news/Btrfs-Linux-6.6 BcacheFS gets a pass for being so new, but there's little that can be said in defence of BTRFS's track record.

    There's no hope left for BTRFS but I might expect BcacheFS to eventually supersede ZFS.
    Last edited by Developer12; 23 December 2023, 10:38 PM.



  • varikonniemi
    replied
    Originally posted by cynic View Post

    really? the statement "the cow filesystem that won't eat your data" isn't a claim of being perfect?



    damn, you got me!
    I have my pockets full of btrfs stocks and ZFS bonds!



    look, bcachefs is 0% battlefield tested: a few users are not a statistically significant sample.
    it will have bugs, and it will eat data.

    if it were easy to develop a 100% correct filesystem, there wouldn't have been horror stories for every filesystem known on earth.



    yawn
    No, it says it won't eat your data. Being perfect would mean it also has no other kinds of problems, and that it has every theoretically possible feature.

    Let me and Phoronix know when the first instance of data loss happens on a release kernel. I bet you will be monitoring this like a hawk.
    Last edited by varikonniemi; 23 December 2023, 07:56 AM.



  • direc85
    replied
    Oof. Corruption on read is indeed a terrible bug to have. Not using noatime might explain it, but either way, please report the bug.

    Now that bcachefs is in the kernel, it's way easier to give it a spin, so that's what people are doing. Bugs are found, which is good! I've been looking for an excuse to re-install my play-notebook again, and bcachefs could just be the one. My use cases will be quite mundane (web, light-to-medium gaming, a little music and video) with some source code compilation thrown into the mix, so that should be a fairly typical load to throw at bcachefs and see what happens!



  • cynic
    replied
    Originally posted by varikonniemi View Post
    Ah, so no-one said it was already perfect. You just troll.
    really? the statement "the cow filesystem that won't eat your data" isn't a claim of being perfect?

    Originally posted by varikonniemi View Post
    Can I guess why? You are really invested in BTRFS or ZFS and are sweating bullets.
    damn, you got me!
    I have my pockets full of btrfs stocks and ZFS bonds!

    Originally posted by varikonniemi View Post
    From what I have seen it has been used for years out-of-tree, and I cannot find a report of it eating all of someone's data. Sure, there have been several bugs, but they are of the kind where pushing a patch or two into the fs, bcachefs-tools, or fsck brings things back to normal. Only erasure coding has resulted in data loss, from what I have seen, and it is not enabled by default.
    look, bcachefs is 0% battlefield tested: a few users are not a statistically significant sample.
    it will have bugs, and it will eat data.

    if it were easy to develop a 100% correct filesystem, there wouldn't have been horror stories for every filesystem known on earth.

    Originally posted by varikonniemi View Post
    The truth is that, as I understand it, this is a completely new approach to filesystems, and one that is surprisingly elegant judging by the features delivered per line of code. That makes it really promising.
    yawn



  • varikonniemi
    replied
    Originally posted by cynic View Post

    just go to bcachefs.org.

    first thing you read is "The COW filesystem for Linux that won't eat your data".

    also, just read some comments on Phoronix talking about how Kent is such a good programmer (while all the others are, obviously, inferior), how much attention Kent paid to rigorous development (while other filesystems are developed haphazardly), how the fs is so solid it should have been included in the Linux 0.1 release, and so on.

    There's so much hype around the readiness and robustness of bcachefs that it's going to do a lot of damage.
    People should have learned something from the mistake of declaring btrfs stable too soon.
    Ah, so no one said it was already perfect. You just troll. Can I guess why? You are really invested in BTRFS or ZFS and are sweating bullets.

    From what I have seen it has been used for years out-of-tree, and I cannot find a report of it eating all of someone's data. Sure, there have been several bugs, but they are of the kind where pushing a patch or two into the fs, bcachefs-tools, or fsck brings things back to normal. Only erasure coding has resulted in data loss, from what I have seen, and it is not enabled by default.

    The truth is that, as I understand it, this is a completely new approach to filesystems, and one that is surprisingly elegant judging by the features delivered per line of code. That makes it really promising.
    Last edited by varikonniemi; 22 December 2023, 11:28 AM.



  • Raka555
    replied
    Originally posted by cynic View Post


    People should have learned something from the mistake of declaring btrfs stable too soon.
    I have an Unraid setup, and their default is to use btrfs on the cache drive. I normally stick to ext4 because it is the only fs that has never given me problems or hassles, but I went with btrfs because it was the default. A month or so ago the Unraid array would not start after a reboot. After wasting time scratching around for clues, I got a hint that the btrfs cache drive was corrupt. I did not have redundancy for the cache drive, so I lost everything. Luckily it was mostly backups and docker images.
    The cache drive is now xfs; they do not have an ext4 option that I could see. So far, so good ...

    So even now, I won't recommend btrfs for anything but /tmp ...
    I am a once-bitten, twice-shy kind of person.
    Last edited by Raka555; 22 December 2023, 06:03 AM.



  • Raka555
    replied
    Originally posted by PuckPoltergeist View Post

    Depends on the filesystem. With jfs I would expect something in /lost+found after fsck. I'm not aware of zeroed files after log-replay. XFS had this behavior a long time ago, because of delayed allocations. But that happened with file writes.
    Yes, but XFS did truncate some files to zero even if they were only ever read. Many people lost data when they tried to load the nvidia driver and it locked the whole system.

    I believe it is because the inode gets updated with the access time. I always mount with noatime because it saves some writes and avoids things like this.
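    For reference, noatime is set per mount; a hypothetical /etc/fstab entry (the UUID and mount point are placeholders, not from this thread):

```
# noatime suppresses access-time updates on reads entirely;
# relatime, the kernel default, only rate-limits them
UUID=0000-0000  /data  xfs  defaults,noatime  0  2
```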



  • cynic
    replied
    Originally posted by Raka555 View Post

    When it is slow and bloated, and/or crashes / eats data
    mmm... ok! I agree!



  • Raka555
    replied
    Originally posted by cynic View Post

    i'm not a hater (how do people hate software?)
    When it is slow and bloated, and/or crashes / eats data



  • guzz46
    replied
    Ok, so it turns out it might have been because it was a recently copied video file that hadn't been synced. I reinstalled on bcachefs and tried again: I copied the video file, ran the sync command, then repeated the test. This time the video file was fine, and so was the text file. I ran the test 5 times and didn't encounter any issues.
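    The copy-then-sync step described above can be made explicit with an fsync before the read-back, which closes the in-flight-write window entirely. A sketch (the helper name is mine, not from the post):

```python
import os
import shutil

def copy_then_flush(src, dst):
    """Copy a file, then force it to stable storage before re-reading.

    Without the flush, reading the copy back races against writeback,
    which is exactly the window where in-flight-write bugs show up.
    """
    shutil.copyfile(src, dst)
    with open(dst, "rb") as f:
        os.fsync(f.fileno())  # flush this file's data to disk
    os.sync()  # flush remaining dirty metadata (directories, etc.)
```

    Running `sync` from the shell after `cp`, as described above, has the same effect as the os.sync() call here.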

    File transfer speed seems comparable to f2fs and xfs. Boot time may be about half a second to a second slower on NixOS. RAM usage is almost twice as much: on xfs I use 600M of RAM at login, while bcachefs uses between 1G and 1.2G. It seems to use more disk space too: on f2fs, df -h states I've used 103G out of 901G, whereas on bcachefs it states I've used 103G out of 825G.
    Last edited by guzz46; 22 December 2023, 05:34 AM.

