There's Talk Again About Btrfs For Fedora


  • #71
    Originally posted by jwilliams View Post
    Are you incapable of rational thought as well as being unable to post anything except nonsense? LOL
    Me: I sometimes dream the sky is red.
    You: "the sky is red." What an idiot, who would say that? That's a valid quote. The rest was blather. btrfs is untrustworthy. zfs rocks.
    Me: /facepalm



    • #72
      Originally posted by jwilliams View Post
      Are you incapable of rational thought as well as being unable to post anything except nonsense? LOL
      Me: I sometimes dream the sky is red.
      You: "the sky is red." What an idiot, who would say that? That's a valid quote. The rest was blather. btrfs is untrustworthy. zfs rocks.
      Me: /facepalm

      Last edited by smitty3268; 19 January 2013, 12:49 AM.



      • #73
        Originally posted by ryao View Post
        btrfs cannot really compare to this. My understanding of btrfs is somewhat superficial, but let's compare:

        btrfs has no ditto blocks, so if metadata is corrupted, there is a strong possibility that it will be unable to recover.
        It uses data and/or metadata mirroring on other devices/partitions if you so choose. I can't really comment on the other points, since I'm a user and not a developer of the FS. But why do they matter in everyday circumstances, anyway?



        • #74
          Originally posted by GreatEmerald View Post
          It uses data and/or metadata mirroring on other devices/partitions if you so choose. I can't really comment on the other points, since I'm a user and not a developer of the FS. But why do they matter in everyday circumstances, anyway?
          Most features people want in btrfs (besides transparent compression, CoW, and snapshots; those are always awesome) are there for enterprise data integrity. The main use case of ZFS is in massive high-throughput storage clusters that can't tolerate any data loss, ever, while operating across often a dozen or more drives in RAID. They depend on atomic operations, data integrity and duplication, and on the FS itself being steeled against its own metadata getting tainted.

          ~user of btrfs on my main Arch install. Because snapshots are the best system restore ever.



          • #75
            Originally posted by mayankleoboy1 View Post
            in most benchies I see on Phoronix, EXT4 runs circles around BTRFS in most of the tests (except when you set compression in BTRFS).
            Don't see why it should be default.

            Most Linux noobs probably use Ubuntu anyway (I am one too).
            btrfs isn't really just about performance, it's about capabilities. btrfs is hugely different from a simple partition format like ext4; btrfs incorporates volume management and redundancy and all sorts of other features that are usually layered on top of simple formats with tools like LVM and mdraid. The capabilities btrfs brings to the table are really useful for distributions, which is why there's always a desire to make it default, but the tools and performance may well need to catch up before this is plausible.



            • #76
              Originally posted by AdamW View Post
              btrfs isn't really just about performance, it's about capabilities. btrfs is hugely different from a simple partition format like ext4; btrfs incorporates volume management and redundancy and all sorts of other features that are usually layered on top of simple formats with tools like LVM and mdraid. The capabilities btrfs brings to the table are really useful for distributions, which is why there's always a desire to make it default, but the tools and performance may well need to catch up before this is plausible.
              Btrfs snapshots alone would really clear up the murky waters that are Linux system restore utilities right now. A decent UI on that, scheduled snapshotting, and easy restore would kick the crap out of other options.

              Tangentially, I just had a reaffirming interaction with btrfs. My main machine lost power, btrfs had a superblock go bad and would segfault on boot; from my recovery disk, btrfsck --repair fixed it easily. Once that tool becomes the mainline fsck.btrfs, this sort of thing will recover with no problem. Promising!
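              For the curious, a minimal sketch of what scheduled snapshotting could look like; the paths and naming scheme here are illustrative assumptions, and the only real command invoked is btrfs subvolume snapshot:

                  import subprocess
                  from datetime import datetime, timezone

                  SOURCE = "/"                  # subvolume to snapshot (hypothetical layout)
                  SNAPSHOT_DIR = "/.snapshots"  # destination directory (hypothetical layout)

                  def take_snapshot():
                      # Name each snapshot by UTC timestamp so they sort chronologically.
                      name = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%S")
                      dest = f"{SNAPSHOT_DIR}/{name}"
                      # -r creates a read-only snapshot, the right kind for restore points.
                      subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SOURCE, dest],
                                     check=True)
                      return dest

                  if __name__ == "__main__":
                      print("created snapshot at", take_snapshot())

              Run something like this from a cron job or systemd timer and you have crude restore points; a real tool would also prune old snapshots and offer rollback.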



              • #77
                Originally posted by ryao View Post
                1. btrfs uses hashes as well, but it has 32-bit hashes on 32-bit userlands and 64-bit hashes on 64-bit userlands. As long as there are no collisions, you can verify the integrity of data, but the probability of a collision is quite high, especially in the 32-bit case.
                2. btrfs has no ditto blocks, so if metadata is corrupted, there is a strong possibility that it will be unable to recover.
                I have been informed by the btrfs developers that these two points are wrong. The checksums are CRC32C on all platforms, which makes them weaker than I thought. Also, btrfs does have ditto blocks and uses them by default on everything but what it detects to be an SSD. This is better than not having them at all, as I had previously been led to believe.
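                To illustrate what a per-block 32-bit checksum buys you, here is a small sketch using Python's zlib.crc32 as a stand-in (btrfs actually uses the Castagnoli CRC32C polynomial, which is not in the standard library, but the detection behaviour is analogous):

                    import zlib

                    block = bytes(4096)                  # a pristine 4 KiB block
                    stored_checksum = zlib.crc32(block)  # what the filesystem records at write time

                    # Simulate silent corruption: flip a single bit somewhere in the block.
                    corrupted = bytearray(block)
                    corrupted[1234] ^= 0x01

                    # On read, the checksum no longer matches, so the bad copy is rejected;
                    # with ditto blocks there is a second metadata copy to fall back on.
                    assert zlib.crc32(bytes(corrupted)) != stored_checksum
                    print("corruption detected")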



                • #78
                  Originally posted by ryao View Post
                  I have been informed by the btrfs developers that these two points are wrong. The checksums are CRC32C on all platforms, which makes them weaker than I thought. Also, btrfs does have ditto blocks and uses them by default on everything but what it detects to be an SSD. This is better than not having them at all, as I had previously been led to believe.
                  CRC32C is not bad for data checking. If there is an error, there is only a 1 in 4 billion chance that BTRFS does not notice. If you get enough disk errors that one is likely to slip through, then you have bigger things to worry about. There is no point in requiring a cryptographic-level hash, because if someone is able to tamper with the data on your disk, they can just as easily tamper with the checksums.

                  A glance at the ECC RAM page on Wikipedia seems to say that you get 8 bits of checksum for each 64-bit word. So by my maths, 1 in 256 RAM errors would go undetected.
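                  Rough arithmetic behind those two figures, under the simplifying assumption that a random corruption maps to a uniformly random check value (real CRCs and SECDED codes additionally catch certain error patterns with certainty):

                      # A 32-bit checksum: a random error collides with the stored value
                      # with probability 2**-32, i.e. roughly 1 in 4.29 billion.
                      crc_bits = 32
                      print(f"32-bit checksum miss rate: 1 in {2**crc_bits:,}")

                      # ECC RAM with 8 check bits per 64-bit word: under the same crude model,
                      # 2**-8 of random errors would slip through, i.e. 1 in 256.
                      ecc_bits = 8
                      print(f"8-bit ECC miss rate (crude model): 1 in {2**ecc_bits}")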

