An Initial Benchmark Of Bcachefs vs. Btrfs vs. EXT4 vs. F2FS vs. XFS On Linux 6.11


  • Originally posted by avis View Post
    RAID1 is basically useless because there are situations when it can't help, so if you're using it and believing you're safe against HW failures, I've got bad news for you.
    Well, in 30 years I have had so many disk failures that caused no service downtime thanks to RAID 1/10 that I can't even understand your statement. From mdadm to btrfs/ZFS RAID, it has saved me countless times.



    • Originally posted by waxhead View Post

      So yes, btrfs is not perfect at all and I strongly dislike some of the choices that were made. That being said, I think BTRFS is the best filesystem out there and nothing else comes close, except maybe bcachefs in some years. For me BTRFS has proven to be rock solid and has never failed except once, when I tested a non-LTS kernel on my desktop which had a known and nasty bug. I was still able to recover all the files I cared about, so for me BTRFS has proven to be a reliable filesystem where I can tell when something is wrong (and in many cases see the error rate increasing before storage devices fail).
      ZFS is far superior to btrfs in every way aside from doing disk-based rather than block-based redundancy (although you can argue that ZFS, due to being better thought out, actually achieves its stated design goals).

      Originally posted by waxhead View Post
      Whether bcachefs is better designed than btrfs remains to be seen. It is claimed to be, and it perfectly well might be, but we won't know until it has more field testing, which hopefully will come now that it is part of the kernel. What I am slightly concerned about is that whenever I see something serious with BTRFS, then LVM, MD, and more often than not bcache (not bcachefs) seem to be involved.
      Agreed; my point is that it would be highly shocking if bcachefs doesn't turn out better than btrfs once it's had time to solve its teething issues.



      • Originally posted by avis View Post

        I've never used CoW filesystems, but none of the "standard/classic" FSes can survive [system] crashes. And I'm not entirely sure CoW filesystems are immune to crashes either.
        They are immune to a large portion of crashes: by design, CoW filesystems don't overwrite/mutate blocks in place when data is written; instead they write the data to fresh blocks, which means that if a crash happens, the old block data is still there.
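
        A rough userspace analogy is the classic write-then-rename pattern (file names made up; this is not how ZFS/btrfs implement CoW internally, just the same never-overwrite-in-place idea):

          # write the updated contents to a brand-new file;
          # the old file's blocks are never touched
          printf 'new contents\n' > data.txt.new
          # flush the new file to stable storage before switching over
          sync data.txt.new
          # atomically repoint the name (rename(2) on the same fs): a crash
          # leaves either the old file or the new one, never a half-written mix
          mv data.txt.new data.txt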

        On top of this there are multiple redundancy levels; it's definitely much harder to corrupt a ZFS filesystem than ext4 (I have managed to break ext4 filesystems many times, usually due to hard crashes).



        • Originally posted by mdedetrich View Post

          ZFS is far superior to btrfs in every way aside from doing disk-based rather than block-based redundancy (although you can argue that ZFS, due to being better thought out, actually achieves its stated design goals).
          I don't think ZFS is superior. The BTRFS management interface is simpler and cleaner in many ways, and it handles differently sized storage devices better, imho. As for fault tolerance, ZFS is not foolproof either; there are failure stories around even for ZFS. No matter what filesystem you use, it's not a substitute for backups.

          http://www.dirtcellar.net



          • Originally posted by waxhead View Post

            I don't think ZFS is superior. The BTRFS management interface is simpler and cleaner in many ways
            I find the opposite, but I guess we have different tastes.

            Originally posted by waxhead View Post
            and it handles differently sized storage devices better, imho.
            Yes, this is a result of btrfs being block-based; that's what I was saying earlier.
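
            For illustration, btrfs will happily build a redundant profile across mismatched disks (device names here are hypothetical):

              # raid1 for both metadata and data across two differently sized devices;
              # btrfs allocates space in fixed-size chunks, so each chunk only needs
              # free space on any two devices rather than equal-sized members
              mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc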

            Originally posted by waxhead View Post
            As for fault tolerance, ZFS is not foolproof either; there are failure stories around even for ZFS. No matter what filesystem you use, it's not a substitute for backups.
            I have done some crazy shit to try to break ZFS, and I have yet to reach a point where the filesystem cannot be mounted. In one case it was really corrupted, but due to its design you can roll back the entire filesystem to a specific transaction ID, and I lost only a few megabytes of data (which I had copies of anyway).
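
            For reference, the rewind described here is exposed through zpool import's recovery mode (flags per zpool-import(8); the pool name is made up):

              # dry run: check whether discarding the last few transactions
              # would make the pool importable, without changing anything
              zpool import -F -n tank
              # actually rewind and import, losing the most recent transactions
              zpool import -F tank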

            Of course I am excluding cases where you destroy all of the hard drives at once, but short of that there is very little you can do to break a ZFS filesystem.



            • Originally posted by avis View Post

              You are the 20th person in this thread to confuse hardware/OS failure with FS failure.

              No, files on ext4 or NTFS or most other FSes don't get corrupted for no reason. As for whether your FS does checksumming, that's up to it. Personally, I have all my files checksummed manually (md5sum * > md5.sum) because I don't trust any FS. And I don't trust RAID either, because it's way too expensive for me. RAID1 is basically useless because there are situations when it can't help, so if you're using it and believing you're safe against HW failures, I've got bad news for you.

              It's not the job of a file system to maintain the integrity of the stored data itself. I mean, it would be great to have such an FS, but imagine the overhead of calculating checksums and then storing extra data (how much, exactly?) to recover from bit flips or even complete bad sectors.
              So, according to you, should I create a virtual machine, a virtual disk inside it, and a file system on that, and do nothing else? Is that how you imagine an FS works? After all, an FS is directly tied to the hardware and software around it.

              He who does nothing, spoils nothing. Sure.
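
              As an aside, the manual scheme quoted above round-trips with md5sum's check mode (plain coreutils, shown for completeness):

                # record a checksum for every file in the directory
                md5sum * > md5.sum
                # later: re-hash the files and flag any that changed
                md5sum -c md5.sum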

