Btrfs LZO Compression Performance


  • #11
    Originally posted by BenderRodriguez View Post
    You may be right.
    What would matter then is the speed of your CPU. What you're compressing also matters heavily.

    On my system I created a tar file of PDF files. Some of the PDFs are text, but most of them are mainly images. Since those are already compressed quite a bit, you don't benefit from it a whole lot.

    This compresses at about 60-70 MB/s and decompresses at close to 400 MB/s. The size saved is small, however: I am only saving 10-20 MB out of 246 MB. So using compression on something like that is not worth it.

    Meanwhile I used dd to create a 573 MB file full of zeros. That compresses down to just over 2.3 MB and takes 3/4 of a second.

    On the other end of the spectrum, a 573 MB file made from /dev/urandom takes almost 10 seconds to compress and actually ends up slightly larger afterwards.
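
    If you want to recreate those two test files, something like this should do it (lzop here is just a stand-in command-line LZO compressor; the post doesn't say which tool was actually used, and 546 MiB is roughly 573 MB):

        # ~573 MB of zeros -- compresses extremely well
        dd if=/dev/zero of=zeros.bin bs=1M count=546

        # ~573 MB of random data -- essentially incompressible
        dd if=/dev/urandom of=random.bin bs=1M count=546

        # time each compression and compare the output sizes
        time lzop -c zeros.bin  > zeros.lzo
        time lzop -c random.bin > random.lzo
        ls -l zeros.lzo random.lzo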

    So even on very fast SSDs it MAY be worth it. If you care more about read speeds it may help. If you care about random access it may hurt.

    It also heavily depends on how smart the system is about using compression.


    It's stupid to compress JPG or GIF files or any sort of common multimedia file. They are already heavily compressed using specialized algorithms, and it's extremely unlikely that LZO is going to help things any. But text files, documentation, program files, and many others can benefit from the compression. Some databases may benefit too, but you would think that if they did, they would already be using compression internally.
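
    A quick way to convince yourself (the file names are hypothetical, and lzop again stands in for the filesystem's LZO):

        lzop -c photo.jpg | wc -c   # roughly the same size as the input
        lzop -c notes.txt | wc -c   # typically a good deal smaller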



    • #12
      SandForce SSD

      I'd be really interested in seeing how this works on an SSD using a SandForce controller. SandForce has the fastest controllers because the drive itself compresses the data. Filesystem compression may actually hurt performance on these fast SSDs.



      • #13
        And because some files are simply not compressible, reiser4 has a simple test that is almost good enough: if it detects that a file cannot be compressed, it doesn't even try.
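
        The idea is easy to approximate from userspace: compress a small leading chunk and see whether it shrinks. A rough sketch, with lzop standing in for the in-kernel code and an arbitrary 128 KB probe size (the real reiser4 heuristic lives in the kernel and differs in detail):

            #!/bin/sh
            # Crude compressibility probe: does the first 128 KB shrink under LZO?
            PROBE=131072
            SIZE=$(head -c "$PROBE" "$1" | lzop -c | wc -c)
            if [ "$SIZE" -lt "$PROBE" ]; then
                echo "$1: compressible ($SIZE of $PROBE bytes after LZO)"
            else
                echo "$1: not worth compressing"
            fi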

        I'm using an SSD for /, with /var, /tmp, and /boot on separate partitions. With reiser4 I was able to store 5 GB more on an 80% full 64 GB disk compared to ext4.



        • #14
          Originally posted by energyman View Post
          And because some files are simply not compressible, reiser4 has a simple test that is almost good enough: if it detects that a file cannot be compressed, it doesn't even try.
          btrfs does the same thing -- it's the difference between `compress` and `compress-force` mount options.

          I am using LZO compression on an S101 netbook and an M4300 notebook with spectacular results. You also have to remember that btrfs only compresses existing data when the data is modified, and even then it only compresses the new extent ... to compress an existing disk completely, you need to mount with a compression option and then initiate a rebalance (which can be done online).
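
          A sketch of that procedure, assuming the filesystem is mounted at /mnt (the balance subcommand spelling has varied across btrfs-progs versions):

              # new writes get compressed from here on
              mount -o remount,compress=lzo /mnt

              # rewrite existing extents so old data is compressed too; runs online
              btrfs filesystem balance /mnt    # newer tools: btrfs balance start /mnt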

          It is too bad about reiser4 ... I never used it myself, but I've always read very good things about it; it's unfortunate Hans was so difficult to work with and ... well ... other things too. Alas, it has no vendor to back it (to get it into mainline) -- btrfs is the future here.

          C Anthony



          • #15
            It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

            The catch is that your memory and CPU will be taxed more, and you will have a slower computer for other stuff.



            • #16
              Originally posted by mbouchar View Post
              It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

              The catch is that your memory and CPU will be taxed more, and you will have a slower computer for other stuff.
              It's a pretty well established idea that on-disk compression can (and does) lead to impressive performance increases under many workloads. It's not a simple "yay" or "nay". The fact is, in the time your disk seeks once, your CPU has already burned through several million cycles ... it's like light speed vs. the fastest human vehicle -- anything you can shave off the latter is probably a win, even if it already seems "pretty fast".

              There are even several workloads that benefit from _memory_ compression ... because RAM -- the uber spaceship of 2010+ -- is still peanuts compared to c (the CPU, in this metaphor). Everything that isn't your CPU is a cache to your CPU; the less time it takes to get data there, the better. Data locality is king.



              "Zcache doubles RAM efficiency while providing a significant performance boost on many workloads."

              Both zcache and btrfs (not sure about ZFS) use LZO ... the simple truth is your CPU is a lazy bastard that spends most of its time blaming its poor efficiency on the rest of the team ;-)

              C Anthony



              • #17
                I use btrfs + LZO compression on the latest Linux images for the O2 Joggler (from the "Coding, retro-gaming, and other projects" site).



                Using slow USB flash devices (mine does ~9 MB/s write and ~27 MB/s read), btrfs with LZO feels significantly faster than zlib, with less CPU usage. Not an actual benchmark, of course.
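
                An informal way to put numbers on that feeling (the mount point and test file are made up, and a real benchmark would also drop caches and average several runs):

                    mount -o remount,compress=zlib /mnt
                    time sh -c 'cp testfile /mnt/a; sync'

                    mount -o remount,compress=lzo /mnt
                    time sh -c 'cp testfile /mnt/b; sync'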



                • #18
                  Me too. I think this testing should be run on some video files to see the real benefit of compression.

                  I don't think a 9X iozone result applies to the real world.

                  Originally posted by BenderRodriguez View Post
                  Why do I get the feeling that zlib/lzo mode speeds up iozone and fs-mark only because the created files are empty and thus compress almost infinitely well?



                  • #19
                    Originally posted by mbouchar View Post
                    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

                    The catch is that your memory and CPU will be taxed more, and you will have a slower computer for other stuff.
                    Only that RAM is dirt cheap and CPUs are underworked almost all the time.



                    • #20
                      It only needs fsck now!

