Btrfs In Linux 3.10 Gets Skinny Extents, Quota Rebuilds


  • #11
    Or should I use another mode of encryption?



    • #12
      Originally posted by ᘜᕟᗃᒟ:
      So would a btrfs filesystem with LZO compression and LVM encryption be reliable?
      You wouldn't do LVM -> encryption -> btrfs -> LZO.

      You'd encrypt the individual partitions with LUKS, like /dev/sda1 and /dev/sdb1, then make the btrfs filesystem on the unlocked mapper devices:

      mkfs.btrfs -L EncryptedRoot /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt

      then mount with compress=lzo to turn on LZO compression. Btrfs IS its own volume manager, so unless you're continually resizing partitions there's no need to do LVM AND btrfs.

      I'm not positive that you can do compression on top of encryption, though; you might be able to, but I'm not 100% sure.
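
      End to end, the layering would look something like this (an untested sketch; the device and mapper names are illustrative, and luksFormat destroys existing data). Since btrfs sits above dm-crypt here, compression happens before the blocks get encrypted underneath:

      # Illustrative devices/names only; adjust for your machine. Destructive!
      cryptsetup luksFormat /dev/sda1
      cryptsetup luksFormat /dev/sdb1
      cryptsetup luksOpen /dev/sda1 sda1_crypt
      cryptsetup luksOpen /dev/sdb1 sdb1_crypt
      mkfs.btrfs -L EncryptedRoot /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt
      mount -o compress=lzo /dev/mapper/sda1_crypt /mnt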
      All opinions are my own, not those of my employer, if you know who they are.



      • #13
        Originally posted by ᘜᕟᗃᒟ:
        So would a btrfs filesystem with LZO compression and LVM encryption be reliable?
        Have a read of:



        • #14
          Hey!

          Maybe the skinny EXTents will fix that fractal fragmentation nightmare ... stupid buffer under-run ...

          But to get off topic entirely, reading the newsgroups on Tux3 ... apparently ... Hirofumi changed the horribly slow itable btree search to a
          simple "allocate the next inode number" counter, and shazam! The slowpoke became a superstar.

          Amazing ... simply amazing ...



          • #15
            Originally posted by juxtatux:
            Hey!

            Maybe the skinny EXTents will fix that fractal fragmentation nightmare ... stupid buffer under-run ...

            But to get off topic entirely, reading the newsgroups on Tux3 ... apparently ... Hirofumi changed the horribly slow itable btree search to a
            simple "allocate the next inode number" counter, and shazam! The slowpoke became a superstar.

            Amazing ... simply amazing ...
            That already got reported on Jux; see Michael's article about Tux3 and Dbench. "Allocate the next inode number" is always the fastest way to do it, it's just not always the smartest. See the article and the follow-up for the explanation of why.
            All opinions are my own, not those of my employer, if you know who they are.



            • #16
              Originally posted by Ericg:
              That already got reported on Jux; see Michael's article about Tux3 and Dbench. "Allocate the next inode number" is always the fastest way to do it, it's just not always the smartest. See the article and the follow-up for the explanation of why.
              Ah, I get it now - you've done that so the front end of tux3 won't encounter any blocking operations and so can offload 100% of operations. It also explains the sync call every 4 seconds to keep the tux3 back end writing out to disk, so that a) all the offloaded work is done by the sync process and not measured by the benchmark, and b) the front end doesn't overrun queues and throttle or run out of memory.

              Oh, so nicely contrived. But terribly obvious now that I've found it. You've carefully crafted the benchmark to demonstrate a best-case workload for the tux3 architecture, then carefully not measured the overhead of the work tux3 has offloaded, and then not disclosed any of this in the hope that the headline is all people will look at.

              This would make a great case study for a "BenchMarketing For Dummies" book.

              When you're right, you're right!



              • #17
                You're posting in the wrong thread...



                • #18
                  Originally posted by GreatEmerald:
                  You're posting in the wrong thread...
                  My bad ... well, on a btrfs *note*, I hope the skinny extents fix the fragmentation issues all the same!



                  • #19
                    Has anyone tried to break btrfs on purpose and documented it?
                    Like a hard reset during power-on, during writes, during recovery, during recovery of a recovery, etc.?

                    I wonder how reliable it is compared to ext4 with data=journal, excluding resistance to bitrot.
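
                    If someone wanted to script that, a crude sketch might look like this (hypothetical; /dev/vdb is an assumed scratch disk, and it should only ever run in a throwaway VM since it hard-resets the machine):

                    # Destructive crash-consistency sketch: disposable VM only.
                    mkfs.btrfs -f /dev/vdb
                    mount /dev/vdb /mnt
                    dd if=/dev/urandom of=/mnt/victim bs=1M count=512 &
                    sleep 2
                    # sysrq 'b' reboots immediately without syncing, like a power cut:
                    echo b > /proc/sysrq-trigger
                    # After reboot: try to mount again, then run btrfsck /dev/vdb
                    # and compare what survived.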



                    • #20
                      Originally posted by brosis:
                      Has anyone tried to break btrfs on purpose and documented it?
                      Like a hard reset during power-on, during writes, during recovery, during recovery of a recovery, etc.?

                      I wonder how reliable it is compared to ext4 with data=journal, excluding resistance to bitrot.
                      I broke btrfs by accident once during an install of F18, because I accidentally killed the power to it (pulled the wrong cable >.>) during an update. I didn't try to fix it, though; for all I know a quick 'btrfsck /dev/sda1' would've fixed it, but since it was a brand-new install I just redid it.

                      And I know btrfs can detect even a single bit of corruption; that got mentioned in a review Michael covered: a company's RAID hardware was failing, btrfs noticed that one bit was corrupted so far and printed warnings, and the failing hardware was caught.
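
                      For reference, the offline check and the online corruption scan look something like this (a sketch; the device and mount point are examples):

                      # Offline consistency check; the filesystem must be unmounted:
                      btrfsck /dev/sda1
                      # Online scrub: reads all data and verifies checksums, reporting
                      # (and, where a redundant copy exists, repairing) corruption:
                      btrfs scrub start /mnt
                      btrfs scrub status /mnt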
                      All opinions are my own, not those of my employer, if you know who they are.

