SUSE Remains Committed To The Btrfs File-System

  • #41
    Originally posted by Michael_S View Post
    What does ZFS have that btrfs does not?

    I have btrfs on half a dozen drives, and I've been using it that way for years without any trouble.
    To which I say...
    What does btrfs have that my coffee machine does not?

    I have a coffee machine, made countless cups of coffee with it, and I've been using it that way for years without any trouble.
    Just where is the logic?! If not having a problem with something is your sole reason for using btrfs, then you're having a problem.

    • #42
      Originally posted by jrch2k8 View Post
      LVM2 doesn't provide the same functionality as RAID to begin with (LVM2+MD RAID kind of does, though). In general you want RAID either for fault tolerance (mirroring data onto another disk or disks) or for data protection (parity calculations).

      Second, we need RAID and checksums in the filesystem precisely for efficiency. LVM2 sits too far away and is too generic to operate efficiently on the FS, while the FS knows exactly how to do things with as little penalty as possible, and on modern hardware it can even make certain optimizations (since it controls the drive directly) that LVM2 was never designed for in the first place (think of things like NVMe 1.2+ or even the virtualization features beyond NVMe 1.3).
      Why then is btrfs so often among the slower filesystems in benchmarks? It has always seemed to me that, in trying to do everything at once, it ends up not so much bad at anything as mediocre at everything. It's a filesystem one would eventually try to get away from, towards more specialised or more advanced filesystems, rather than one that tries to do it all.

      • #43
        Originally posted by sdack View Post
        Why then is btrfs so often among the slower filesystems in benchmarks? It has always seemed to me that, in trying to do everything at once, it ends up not so much bad at anything as mediocre at everything. It's a filesystem one would eventually try to get away from, towards more specialised or more advanced filesystems, rather than one that tries to do it all.
        There's no such thing as a one-size-fits-all filesystem. It's slower because it does more. It's up to the system administrator to decide where he needs CoW, checksum error detection, transparent compression, etc.

        • #44
          Originally posted by sdack View Post
          Why then is btrfs so often among the slower filesystems in benchmarks? It has always seemed to me that, in trying to do everything at once, it ends up not so much bad at anything as mediocre at everything. It's a filesystem one would eventually try to get away from, towards more specialised or more advanced filesystems, rather than one that tries to do it all.
          There is a bit of a "concept" problem here; let's see if I can help you get a better picture of the situation.

          Let's start by defining two concepts, or markets, for a filesystem.

          1.) Home-user FSes are defined by "please moar speed lol", i.e. most regular home users care about the speed-related parts of the FS: you want your apps to load fast, your games to load fast, etc., with a medium level of trust that the FS will keep things safe(ish).

          2.) Enterprise FSes are defined by "every bit must be SAFEEEEEEEEEEE", i.e. in business, speed is a nice extra but in no way a requirement; the only absolute requirement is data preservation (see ***), and everything else comes after.

          Now let's define benchmarking:

          1.) Write/read speeds, as Phoronix and other reviewers measure them, with the implication that the best FS is the one that reads, seeks, and writes the fastest. The problem is that none of the existing benchmarks tell you anything about data integrity; most don't even care if half the data on the receiving end is corrupted, as long as it got there the fastest. Users without a storage background are used to those bar charts from other benchmarks, like FPS in games, so it seems to make sense, but in a very bad way: it generates a horribly wrong idea of what an FS should do (see the little sketch after this list for what such a benchmark actually measures).

          2.) Data integrity: literally zero usable benchmarking tools, but lots of "papers", i.e. studies by huge enterprises across a variety of scenarios, with extremely focused methodologies to verify data integrity under extreme conditions over years (years being the keyword here). See the problem now? It is simply impossible to measure data integrity in any trustworthy way fast enough to fit into those cute graphs. So how does a user who is not into enterprise storage make a decision? Look at a benchmark and pick the fastest, or spend endless hours on Google trying to find good information and learning to discard the many irresponsible blogs claiming that ECC is not important or that NTFS is just fine, etc.
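
          To make point 1 concrete, this is roughly all that a typical throughput benchmark boils down to (a minimal Python sketch; the file name and sizes are arbitrary choices for the example):

            import os, time

            # Naive "FS benchmark" of the kind the bar charts come from: it times raw
            # writes and never once asks whether the bytes are still correct.
            PATH, BLOCK, COUNT = "bench.tmp", 1024 * 1024, 256   # 256 MiB in 1 MiB blocks
            payload = os.urandom(BLOCK)

            start = time.perf_counter()
            with open(PATH, "wb") as f:
                for _ in range(COUNT):
                    f.write(payload)
                f.flush()
                os.fsync(f.fileno())
            elapsed = time.perf_counter() - start

            print(f"write: {BLOCK * COUNT / elapsed / 1e6:.1f} MB/s")
            os.remove(PATH)
            # Note what is missing: nothing here re-reads the file, compares checksums,
            # or waits a few years to see whether the bits quietly rot.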

          *** Why is data integrity a problem at all? Just buy better HDDs, why should I care? This is where a huge chunk of the problem lies: most people tend to believe RAM and hard drives are perfect media, i.e. if you put a bit on them and check 10 years later, the same bit will still be there in exactly the same condition, and all those "enterprise" and "ECC" labels are just there to charge you more money, or some kind of placebo.

          In reality, RAM can flip bits in real time under certain conditions. For example, if your GPU turbo-boosts and your PSU can't cope fast enough, your RAM may receive a tiny bit less power for a microsecond and fail to read some cells, handing corrupted data back to your game or application; or your RAM can have hardware defects and simply corrupt data in certain sections of the stick, etc. And the OS has no way to know about this or warn you in any way.

          Hard drives present a similar issue, known as "bit rot". Regardless of the type of drive (mechanical, SSD, M.2, etc.), every I/O operation has a small mathematical chance of flipping a stored bit somewhere on the drive. The chance depends heavily on the quality of the manufacturing process and the software inside the drive (the firmware), but it will never be zero.
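
          To put a rough number on that "never zero", here is a back-of-the-envelope Python calculation; the error rate is an assumed figure for illustration only, real drives quote their own rates on the spec sheet:

            # Back-of-the-envelope: chance of at least one flipped/unreadable bit when
            # reading a given amount of data, assuming a fixed error rate per bit.
            BIT_ERROR_RATE = 1e-15                      # ~1 bad bit per 10^15 bits read (assumed)

            def p_at_least_one_error(terabytes_read: float) -> float:
                bits = terabytes_read * 8e12            # 1 TB = 8 * 10^12 bits
                # P(no error) = (1 - rate)^bits, so take the complement
                return 1.0 - (1.0 - BIT_ERROR_RATE) ** bits

            for tb in (1, 10, 100):
                print(f"{tb:>3} TB read -> P(at least one bad bit) ~ {p_at_least_one_error(tb):.1%}")

          Tiny per bit, but it adds up the moment you move serious amounts of data.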

          But I never noticed any of this! I've played games for 10 years and never saw it! I've used Office since Win95, this is bullshit! But is it? Most PC users suffer from this on a regular basis; it is just not important enough for them to care, or to work out why it happened. What do I mean? Have you ever seen a weird pixel flash in your game for a single frame? A game save that sometimes fails to load while the next one is fine? An Excel file with a weird symbol in a cell that you assume came from some odd key press six months ago? An old game that throws an exception for no reason but is fixed after a reinstall? As you can see, this can be annoying for a home user, but it is not life-or-death, and nothing that would bother anyone enough to pay for extra protection, right?

          Now look at the other end of the spectrum: can you see how a handful of flipped bits can have huge consequences for several industries and businesses, from loss of trust to massive lawsuits or even deaths? A small bit error can crash a nice multi-million-dollar space probe, trigger panic in the stock market, take down a bridge somewhere, change your bank account balance, etc. etc. Now it seems important enough, right?

          This is why (non-)ECC and the NTFS/FAT32/XFS/EXT4/F2FS/HFS/AppleFS filesystems on one side, and ZFS/BTRFS on the other, exist. In the first block we have the regular home-user FSes, which do a varying amount of error checking as long as it doesn't hurt speed (by error-checking capability, roughly XFS > EXT4 > F2FS > HFS+ > NTFS > FAT32). In the second block we have the "self-healing" filesystems (ZFS > BTRFS ..... way, way down XFS > ...). The first priority of these FSes is to checksum the data written and read, to make sure no bit has been flipped or has "rotted" over time, and, if it has changed, to find a good copy somewhere else and fix it (this is where RAID enters the picture: the more copies of the same data, the better the chance of repairing errors). The next priority of the self-healing FSes is features interesting for storage, like compression, deduplication, encryption, volumes, snapshots, ACLs, etc., and only the last priority is speed.
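
          As a toy illustration of that checksum-and-repair loop, here is a minimal sketch in plain Python; this is not how btrfs or ZFS actually lay data out, and the two-copy "mirror" and repair policy are made up for the example:

            import hashlib

            # Toy model of a self-healing mirror: every block is written twice, and each
            # copy carries a checksum taken at write time.
            def write_block(data: bytes) -> list:
                digest = hashlib.sha256(data).hexdigest()
                return [{"data": data, "sum": digest},     # copy 0
                        {"data": data, "sum": digest}]     # copy 1 (the mirror)

            def read_block(copies: list) -> bytes:
                for copy in copies:
                    if hashlib.sha256(copy["data"]).hexdigest() == copy["sum"]:
                        # This copy still matches its checksum: use it, and overwrite
                        # any sibling copy that no longer verifies.
                        for other in copies:
                            if other is not copy:
                                other["data"], other["sum"] = copy["data"], copy["sum"]
                        return copy["data"]
                raise IOError("all copies corrupted, nothing left to heal from")

            block = write_block(b"important payroll data")
            block[0]["data"] = b"important pAyroll data"   # simulate a rotted bit in copy 0
            print(read_block(block))                        # returns good data and repairs copy 0

          The checksum is the key piece: without it, a plain mirror can only tell you that the two copies differ, not which one is still correct.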

          Now that we have a decent understanding of WHY each of them exists, let's discuss BTRFS's issues. For the job it was created for, BTRFS is already great, but compared to ZFS (which was already an enterprise-grade standard 10 years ago) it still has several shortcomings. Let's look at them now:

          1.) Self-healing: both ZFS and BTRFS are awesome; you can trust every bit that comes out of either with your life.
          2.) RAID support: ZFS wins hands down. BTRFS is also great at RAID 1 and 0, but 5/6/50/60 are experimental or still under development (this kind of thing takes years of testing), so it falls a bit short here.
          3.) Features: again ZFS hands down, but BTRFS is about 80% of the way there; mostly, deduplication is still experimental and it lacks read optimizations in certain scenarios like virtualization.
          4.) Enterprise hardware compatibility: ZFS again hands down, but BTRFS is probably 70% of the way there, mostly missing hot-swap support.
          5.) Speed: ZFS wins again, simply because it has had more time on the market, but neither will ever be as fast as XFS or EXT4.
          6.) Most people dismissing either of them are simply misjudging the results out of ignorance of the subject matter. The problem some of us have with BTRFS is not about speed or losing benchmarks against EXT4, but the development pace of the missing features (hot-swap support is pretty damn important for storage, for example, as is RAID 60) and whether it makes sense to divide manpower when ZFS and BTRFS are really, really similar in basically every aspect (not all, though).

          Should a regular user care about BTRFS or ZFS? Yes, if you consider data safety a priority over speed; no, if you need speed over data safety.

          Do ZFS or BTRFS remove the need for backups? Big fat hell NO. What we mean by data protection here is RUNTIME data protection, while backups are COLD, or NON-RUNTIME, data protection. So regardless of the FS or OS you use, make freaking backups and take every measure you consider necessary to keep them safe.

          I hope this helps clear up the situation with filesystems and ECC for anyone interested.

          • #45
            Originally posted by starshipeleven View Post
            This is kinda tangential, also btrfs is an "anti-hw-raid" platform for that matter.
            Very true, won't disagree with that (I just like putting the ZFS fan-bois in their place from time to time).

            • #46
              Originally posted by jrch2k8 View Post
              Mmm, I had heard of this but didn't think it was accurate. It would be interesting if you could check systemd on ext4 or xfs on your machine.
              I'm not able to try that again, since I'm already in the process of switching away from systemd. ;\ However, I managed to make it zombify by setting Conflicts=network.target, or maybe Conflicts=NetworkManager.target, in my pre-hibernate.service. The idea was that before the machine went into hibernation the network would be brought down, and then the backup processes would run. I'm not sure if I still have that service file anywhere. :|

              Maybe back to topic now...

              • #47
                Originally posted by profoundWHALE View Post

                Now, you were right in saying that ZFS is not perfect for every use case. It, like every other filesystem, has its pros and cons.

                But why did you have to leave the troll bait? Just seems really unnecessary.
                You need to hang around here for a while. There are a few sacred cows that you aren't allowed to touch.

                • #48
                  Originally posted by jrch2k8 View Post

                  There is a bit of a "concept" problem here; let's see if I can help you get a better picture of the situation.

                  Let's start by defining two concepts, or markets, for a filesystem.

                  1.) Home-user FSes are defined by "please moar speed lol", i.e. most regular home users care about the speed-related parts of the FS: you want your apps to load fast, your games to load fast, etc., with a medium level of trust that the FS will keep things safe(ish).

                  2.) Enterprise FSes are defined by "every bit must be SAFEEEEEEEEEEE", i.e. in business, speed is a nice extra but in no way a requirement; the only absolute requirement is data preservation (see ***), and everything else comes after.
                  No, that's not how it's defined, only perhaps how you define it. *lol*

                  • #49
                    Originally posted by sdack View Post
                    No, that's not how it's defined, only perhaps how you define it. *lol*
                    Some humor for that wall of text.

                    • #50
                      Originally posted by sdack View Post
                      No, that's not how it's defined, only perhaps how you define it. *lol*
                      His is a simplification, but it's roughly like that.

                      Of course there are various levels of safe, but also MS has its own next-gen FS with checksumming and LVM and whatnot, and it runs worse than plain NTFS (or FAT32 for that matter).
