Btrfs In Linux 3.10 Gets Skinny Extents, Quota Rebuilds

  • #16
    Originally posted by Ericg View Post
    That already got reported on Jux; see Michael's article about Tux3 and dbench. "Allocate the next inode number" is always the fastest way to do it, it's just not always the smartest. See the article and the follow-up for the explanation of why.
    Ah, I get it now - you've done that so the front end of tux3 won't encounter any blocking operations and so can offload 100% of operations. It also explains the sync call every 4 seconds to keep tux3 back end writing out to disk so that a) all the offloaded work is done by the sync process and not measured by the benchmark, and b) so the front end doesn't overrun queues and throttle or run out of memory.

    Oh, so nicely contrived. But terribly obvious now that I've found it. You've carefully crafted the benchmark to demonstrate a best case workload for the tux3 architecture, then carefully not measured the overhead of the work tux3 has offloaded, and then not disclosed any of this in the hope that all people will look at is the headline.

    This would make a great case study for a "BenchMarketing For Dummies" book.

    When you're right you're right!
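The deferred-work effect described here can be sketched as a toy model (purely illustrative; the names and structure are invented, not Tux3 code): a front end that only queues operations looks arbitrarily fast unless the timed region also includes the back-end flush.

```python
import time
from collections import deque

# Toy model of a non-blocking front end: operations are only queued, and a
# back end ("sync") does the real work later. Timing just the front-end loop
# hides the deferred cost; an honest number includes the flush.
queue = deque()
fs = set()  # stands in for on-disk state

def frontend_create(name):
    queue.append(name)          # returns immediately; nothing hits "disk"

def backend_sync():
    while queue:                # drain the deferred work
        fs.add(queue.popleft())

t0 = time.perf_counter()
for i in range(10_000):
    frontend_create(f"file{i}")
t_front = time.perf_counter() - t0   # the flattering number
backend_sync()
t_total = time.perf_counter() - t0   # the honest number: flush included

assert len(fs) == 10_000 and not queue
```

The point is not the absolute numbers but that t_front alone credits the front end with work the back end has not done yet.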

    • #17
      You're posting in the wrong thread...

      • #18
        Originally posted by GreatEmerald View Post
        You're posting in the wrong thread...
        My bad... well, on a Btrfs note, I hope the skinny extents fix the fragmentation issues all the same!

        • #19
          Has anyone tried to break Btrfs on purpose and documented it?
          Like a hard reset during power-on, during writes, during recovery, during recovery of a recovery, etc.?

          I wonder how reliable it is compared to ext4 with data=journal, excluding resistance to bitrot.
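A rough illustration of why such crash testing separates designs: a toy model (emphatically not btrfs or ext4 code; every name here is invented) of in-place overwrites versus copy-on-write updates when power is cut part-way through rewriting a three-block file.

```python
# Toy crash model: a "file" is three blocks; power can be cut after any
# number of write steps.
OLD = ["a0", "a1", "a2"]
NEW = ["b0", "b1", "b2"]

def crash_inplace(cut_after):
    """Overwrite the live blocks; power dies after `cut_after` block writes."""
    disk = list(OLD)
    for i in range(min(cut_after, len(NEW))):
        disk[i] = NEW[i]
    return disk                      # what a reader sees after reboot

def crash_cow(cut_after):
    """Write a shadow copy first; the final step atomically flips the root."""
    shadow, steps = [], 0
    for b in NEW:
        if steps == cut_after:
            return list(OLD)         # crashed mid-copy: root still points at OLD
        shadow.append(b)
        steps += 1
    if steps == cut_after:
        return list(OLD)             # crashed just before the pointer flip
    return shadow                    # pointer flipped: NEW is fully visible

# Every crash point leaves the CoW scheme consistent (all-old or all-new)...
for cut in range(5):
    assert crash_cow(cut) in (OLD, NEW)

# ...while an in-place update can be caught half-written.
assert crash_inplace(1) == ["b0", "a1", "a2"]    # torn mix of old and new
```

Roughly speaking, btrfs gets the all-or-nothing property from copy-on-write trees, while ext4's data=journal gets a comparable guarantee by writing data through the journal first; how each degrades under repeated interrupted recoveries is exactly what deliberate breakage testing would show.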

          • #20
            Originally posted by brosis View Post
            Has anyone tried to break Btrfs on purpose and documented it?
            Like a hard reset during power-on, during writes, during recovery, during recovery of a recovery, etc.?

            I wonder how reliable it is compared to ext4 with data=journal, excluding resistance to bitrot.
            I broke btrfs by accident once during an install of F18, because I accidentally killed the power to it (pulled the wrong cable >.>) during an update. I didn't try to fix it, though; for all I know a quick 'btrfsck /dev/sda1' would've fixed it, but since it was a brand-new install I just redid it.

            And I know btrfs can detect even a single bit of corruption; that got mentioned in a case Michael covered: a company's RAID hardware was failing, btrfs noticed that one bit had been corrupted so far, printed warnings, and the failing hardware was caught.
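That single-bit detection comes from per-block checksums verified on reads and during scrub. A minimal sketch of the idea (btrfs actually uses CRC32C; plain zlib.crc32 is used here purely for illustration, and the storage layout is invented):

```python
import zlib

BLOCK = 4096  # checksum granularity in this toy model

def write_blocks(data):
    """Store (mutable block, checksum) pairs, as a checksumming fs would."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [(bytearray(b), zlib.crc32(b)) for b in blocks]

def scrub(store):
    """Return the indices of blocks whose contents no longer match."""
    return [i for i, (b, csum) in enumerate(store)
            if zlib.crc32(bytes(b)) != csum]

store = write_blocks(b"\x00" * (4 * BLOCK))
assert scrub(store) == []        # clean data verifies
store[2][0][100] ^= 0x01         # flip a single bit in block 2
assert scrub(store) == [2]       # the scrub pinpoints the corrupt block
```

Checksums alone only detect the flip; on a redundant btrfs profile such as RAID1, the intact mirror copy can then be used to repair the bad block.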

            • #21
              Originally posted by Ericg View Post
              I broke btrfs by accident once during an install of F18, because I accidentally killed the power to it (pulled the wrong cable >.>) during an update. I didn't try to fix it, though; for all I know a quick 'btrfsck /dev/sda1' would've fixed it, but since it was a brand-new install I just redid it.

              And I know btrfs can detect even a single bit of corruption; that got mentioned in a case Michael covered: a company's RAID hardware was failing, btrfs noticed that one bit had been corrupted so far, printed warnings, and the failing hardware was caught.
              Thanks for the response! But I meant a more serious, analytic breaking of things, to prove whether it's more (or less) reliable than existing solutions like ext4.

              Bitrot detection of course gives Btrfs bonus points, no question.

              • #22
                Originally posted by brosis View Post
                Thanks for the response! But I meant a more serious, analytic breaking of things, to prove whether it's more (or less) reliable than existing solutions like ext4.

                Bitrot detection of course gives Btrfs bonus points, no question.
                As far as purposefully breaking it goes... I haven't seen anyone try with recent kernels. Keep in mind that up until about two releases ago, btrfs had some outstanding corruption bugs, so everyone complained; those got fixed, but I don't think anyone has done real tests since then. You'll probably hear about a few soon, since SUSE has marked it as production-ready, and I'm sure RHEL 7 will include it as an option.

                • #23
                  Originally posted by Ericg View Post
                  I broke btrfs by accident once during an install of F18, because I accidentally killed the power to it (pulled the wrong cable >.>) during an update. I didn't try to fix it, though; for all I know a quick 'btrfsck /dev/sda1' would've fixed it, but since it was a brand-new install I just redid it.

                  And I know btrfs can detect even a single bit of corruption; that got mentioned in a case Michael covered: a company's RAID hardware was failing, btrfs noticed that one bit had been corrupted so far, printed warnings, and the failing hardware was caught.
                  Just like that other indestructible B-tree hive, the Windows registry; bit for bit... completely reproducible on dd copies!
