Btrfs LZO Compression Performance


  • Btrfs LZO Compression Performance

    Phoronix: Btrfs LZO Compression Performance

    While the performance of the Btrfs file-system with its default mount options didn't change much with the just-released Linux 2.6.38 kernel as shown by our large HDD and SSD file-system comparison, this new kernel does bring LZO file-system compression support to Btrfs. This Oracle-sponsored file-system has supported Gzip compression for months as a means to boost performance and preserve disk space, but now there's support for using LZO compression. In this article we are looking at the Btrfs performance with its default options and then when using the transparent Zlib and LZO compression.

    http://www.phoronix.com/vr.php?view=15809
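
    For reference, transparent compression in btrfs is selected at mount time. A minimal sketch of the kind of mount lines involved (the device name, mount point, and fstab entry here are placeholders, not the article's actual test setup):

        mount -o compress=zlib /dev/sdb1 /mnt/btrfs   # transparent zlib compression
        mount -o compress=lzo  /dev/sdb1 /mnt/btrfs   # LZO compression, new in Linux 2.6.38
        # or persistently, via an /etc/fstab entry:
        # /dev/sdb1  /mnt/btrfs  btrfs  compress=lzo  0  0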

  • #2
    This bench cries out to be compared to reiser4's compression abilities. If you could do that, it would be awesome...

    P.S. At the time of writing there is no 2.6.38 reiser4 patch yet, but it should follow shortly.



    • #3
      Why do I get the feeling that zlib/LZO mode speeds up IOzone and fs-mark only because the created files are empty and thus compress almost infinitely well?



      • #4
        Originally posted by Kirurgs View Post
        This bench cries out to be compared to reiser4's compression abilities. If you could do that, it would be awesome...

        P.S. At the time of writing there is no 2.6.38 reiser4 patch yet, but it should follow shortly.
        Indeed, more so considering reiser4 has had LZO compression support for many years already.



        • #5
          I know, seeing these benchmarks takes me back to my reiser4 days.

          I guess it really was years ahead of its time. It's a shame that the same level of development wasn't maintained after Hans' arrest.



          • #6
            Originally posted by BenderRodriguez View Post
            Why do I get the feeling that zlib/LZO mode speeds up IOzone and fs-mark only because the created files are empty and thus compress almost infinitely well?
            I get that impression, too. This makes those tests completely irrelevant.



            • #7
              LZO + SSD?

              I wonder whether compression helps or hurts performance on an SSD. Access is already fast.



              • #8
                It depends on what type of files you have and how many of them. Binary/video/audio/PDF and other already-compressed files don't compress well, so LZO won't help much there; it only helps with files that compress well. Also, since the files take a little less space, it helps lower the number of writes to an SSD.

                The threaded writes are probably slower because the workload becomes CPU-bound: without zlib/LZO the CPU only has to issue the write (or not at all, with DMA), while with zlib/LZO it also has to compress the data.



                • #9
                  Originally posted by BenderRodriguez View Post
                  It depends on what type of files you have and how many of them. Binary/video/audio/PDF and other already-compressed files don't compress well, so LZO won't help much there; it only helps with files that compress well. Also, since the files take a little less space, it helps lower the number of writes to an SSD.

                  The threaded writes are probably slower because the workload becomes CPU-bound: without zlib/LZO the CPU only has to issue the write (or not at all, with DMA), while with zlib/LZO it also has to compress the data.
                  I was thinking that for a slow mechanical disk, LZO helps performance because compressing/decompressing is faster than reading/writing the extra blocks to/from disk. But with an SSD, maybe not.



                  • #10
                    Originally posted by nbecker View Post
                    I was thinking that for a slow mechanical disk, LZO helps performance because compressing/decompressing is faster than reading/writing the extra blocks to/from disk. But with an SSD, maybe not.
                    You may be right.



                    • #11
                      Originally posted by BenderRodriguez View Post
                      You may be right.
                      What would matter then is the speed of your CPU. It also depends heavily on what you're compressing.

                      On my system I created a tar file of PDF files. Some of the PDFs are text, but most of them are mainly images. Since they are already compressed quite a bit, you don't benefit from it a whole lot.

                      This compresses at about 60-70 MB/s and decompresses at close to 400 MB/s. The space saved is small, however: I am only saving 10-20 MB out of 246 MB. So using compression on something like that is not worth it.

                      Meanwhile, I used dd to create a 573 MB file full of zeros. That compresses down to just over 2.3 MB and takes 3/4 of a second.

                      On the other end of the spectrum, a 573 MB file made from /dev/urandom takes almost 10 seconds to compress and is actually slightly larger afterwards.

                      So even on very fast SSDs it MAY be worth it. If you care more about read speeds it may help. If you care about random access it may hurt.

                      It also heavily depends on how smart the system is about using compression.


                      It's pointless to compress JPEG or GIF files, or any other common multimedia format. They are already heavily compressed using specialized algorithms, and it's extremely unlikely that LZO is going to help things any. But text files, documentation, program files, and many others can benefit from compression. Some databases may benefit as well, although you would think that if they did, they would already be using compression internally.
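
                      A rough way to reproduce the zero-file versus random-file comparison above from a shell, assuming the userspace lzop tool is installed (file names and sizes are arbitrary):

                          dd if=/dev/zero    of=zeros.bin  bs=1M count=573   # highly compressible
                          dd if=/dev/urandom of=random.bin bs=1M count=573   # essentially incompressible
                          time lzop -c zeros.bin  > zeros.lzo    # shrinks to a few MB almost instantly
                          time lzop -c random.bin > random.lzo   # takes far longer and may even grow slightly
                          ls -lh zeros.bin zeros.lzo random.bin random.lzo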



                      • #12
                        Sandforce SSD

                        I'd be really interested in seeing how this works on an SSD with a SandForce controller. SandForce has the fastest controllers because the drive itself compresses the data. Filesystem compression may actually hurt performance on these fast SSDs.



                        • #13
                          And because some files are not compressible, reiser4 has a simple test that is almost good enough: if it detects that a file cannot be compressed, it doesn't even try.

                          I'm using an SSD for /, with /var, /tmp, and /boot on different partitions. With reiser4 I was able to store 5 GB more on an 80% full 64 GB disk compared to ext4.
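
                          A crude shell illustration of that kind of "is it worth compressing?" check: compress only a leading sample of the file and look at the ratio. The 128 KiB sample size, the 90% threshold, and the use of lzop as the compressor are arbitrary choices for illustration, not what reiser4 (or btrfs) actually does internally:

                              # compress the first 128 KiB of a file and compare sizes
                              sample_ratio() {
                                  sample=131072
                                  compressed=$(head -c "$sample" "$1" | lzop -c | wc -c)
                                  if [ "$compressed" -lt $((sample * 90 / 100)) ]; then
                                      echo "$1: looks compressible ($compressed of $sample sample bytes)"
                                  else
                                      echo "$1: probably not worth compressing ($compressed of $sample sample bytes)"
                                  fi
                              }
                              sample_ratio somefile.pdf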



                          • #14
                            Originally posted by energyman View Post
                            And because some files are not compressible, reiser4 has a simple test that is almost good enough: if it detects that a file cannot be compressed, it doesn't even try.
                            btrfs does the same thing -- it's the difference between `compress` and `compress-force` mount options.

                            I am using LZO compression on an S101 netbook and an M4300 notebook with spectacular results. You also have to remember that btrfs only compresses existing data when the data is modified, and even then it only compresses the new extent ... to compress an existing disk completely you need to mount with a compression option and then initiate a rebalance (this can be done online).
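
                            A minimal sketch of that mount-then-rewrite procedure (the mount point is a placeholder, exact btrfs-progs syntax has varied across versions, and the defragment line is an alternative way to rewrite extents besides the rebalance mentioned above):

                                mount -o remount,compress=lzo /mnt          # new writes are now LZO-compressed
                                btrfs filesystem defragment -r -clzo /mnt   # rewrite existing files with compression
                                btrfs balance start /mnt                    # a rebalance also rewrites (and compresses) extents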

                            It is too bad about reiser4 ... I never used it myself, but I've always read very good things about it; it's unfortunate Hans was so difficult to work with and ... well ... other things too. Alas, it has no vendor to back it (to get it into mainline) -- btrfs is the future here.

                            C Anthony



                            • #15
                              It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

                              The catch is that your memory and CPU will be taxed more, so you will have a slower computer for other stuff.
