Using Disk Compression With Btrfs To Enhance Performance

  • Using Disk Compression With Btrfs To Enhance Performance

    Phoronix: Using Disk Compression With Btrfs To Enhance Performance

    Earlier this month we delivered benchmarks comparing the ZFS, EXT4, and Btrfs file-systems on both solid-state drives and hard drives. The EXT4 file-system was the clear winner in overall disk performance, with Btrfs coming in second, followed by Sun's ZFS on FreeBSD 8.2. It was a surprise that in our most recent testing the EXT4 file-system turned around and did better than the next-generation Btrfs file-system, but it turns out that Btrfs regressed badly in Linux 2.6.35, the kernel found in Ubuntu 10.10 and other soon-to-be-released distributions. However, regardless of where Btrfs currently stands, its speed can be boosted by enabling its transparent zlib compression support.

    http://www.phoronix.com/vr.php?view=15233
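    A minimal sketch of enabling the transparent compression the article benchmarks, assuming a spare Btrfs-formatted partition; the device and mount point below are placeholders, not taken from the article. The plain "compress" mount option selects zlib on these kernels, and only data written after mounting with it goes through the compressor.

    ```python
    import subprocess

    DEVICE = "/dev/sdb1"        # hypothetical Btrfs-formatted partition
    MOUNTPOINT = "/mnt/btrfs"   # hypothetical mount point, must already exist

    # Mount with Btrfs' transparent compression enabled (zlib by default).
    subprocess.run(["mount", "-o", "compress", DEVICE, MOUNTPOINT], check=True)

    # Files created before the option was active keep their uncompressed
    # extents; only new writes are compressed.
    with open(f"{MOUNTPOINT}/sample.txt", "w") as f:
        f.write("highly repetitive text\n" * 100_000)
    ```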

  • #2
    "Using Disk Compression With Btrfs To Enhance Performance"? But in the article itself you then claim there's no performance gain?

    Explain?

    • #3
      Maybe it was sarcasm?

      • #4
        Originally posted by nanonyme View Post
        Maybe it was sarcasm?
        Yes, it was really great sarcasm.

        • #5
          Oh. It was very subtle, then.

          • #6
            What effect did compression have on the CPU usage?
            The following benchmark of ZFS' compression algorithms shows that gzip compression was far more CPU-bound than lzjb compression: http://don.blogs.smugmug.com/2008/10...ession-update/

            I wonder why they've gone for gzip compression instead of something lighter such as lzjb, when the ZFS tests show such a difference between the performance cost and the space savings.
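            As a rough illustration of that trade-off, timing Python's zlib at a few compression levels on one compressible buffer shows how quickly the CPU cost grows relative to the extra space saved. This is not a filesystem benchmark, and the dictionary file path is only an assumption; any large text file will do.

            ```python
            import time
            import zlib

            # Any reasonably large, compressible file; this path is an assumption.
            data = open("/usr/share/dict/words", "rb").read()

            for level in (1, 6, 9):  # fast, default, maximum compression
                start = time.perf_counter()
                compressed = zlib.compress(data, level)
                elapsed = time.perf_counter() - start
                ratio = len(compressed) / len(data)
                print(f"level {level}: {ratio:.1%} of original size in {elapsed * 1000:.1f} ms")
            ```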

            • #7
              Originally posted by RealNC View Post
              Oh. It was very subtle then
              Not as subtle as mine.

              • #8
                "Using Disk Compression With Btrfs To Enhance Performance"

                and ext4 wins most of the benchmarks... Pwned!

                • #9
                  They should have used lzo instead of gzip.

                  Anyway, the important thing, *again*: what does the test data of the benchmark programs look like? If it's only zeros, that's not a very fair or realistic benchmark; it will be skewed in favor of compressed filesystems.

                  See the iozone benchmark, for example.
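                  A quick way to see that skew is to compress a buffer of zeros and a buffer of random bytes and compare how much would actually hit the disk on a compressed filesystem. This is only illustrative; the benchmark tools themselves decide what they really write.

                  ```python
                  import os
                  import zlib

                  zeros = b"\x00" * (1 << 20)        # 1 MiB of zeros
                  random_data = os.urandom(1 << 20)  # 1 MiB of incompressible random bytes

                  print("zeros  ->", len(zlib.compress(zeros)), "bytes after zlib")
                  print("random ->", len(zlib.compress(random_data)), "bytes after zlib")
                  # Zeros collapse to a few KiB, so a compressed filesystem barely touches
                  # the disk; random data stays at ~1 MiB and must be written in full.
                  ```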

                  • #10
                    Pointless

                    The Vertex 2's SandForce controller implements compression at the hardware level. Activating software compression will of course decrease performance.

                    • #11
                      just a thought...

                      Why is there such concern over which kernel has a small performance regression in it (in a not-for-production-use FS, no less)? Can you not upgrade kernels in Ubuntu/Fedora/SUSE/etc.? Does a vanilla kernel not work? If 2.6.35 is bad for the default FS of Ubuntu/$DISTRO, surely they would ship 2.6.34 or some other version? If you don't like changes, run one of the long-term kernels; take your pick of the older kernels listed as stable on http://www.kernel.org/ (2.6.34.x, 2.6.33.x, 2.6.27.x). All of these receive backported fixes for bugs and security issues.

                      I'm sure I'm missing something, since I switched to Gentoo some 7 years ago after getting grumpy at not being able to use a vanilla kernel with some DRM patches on Red Hat (it was Red Hat then) and SUSE. It sure would be nice if someone made a "make config" option for the kernel, but Gentoo has genkernel and it tends to work. Do Ubuntu/Fedora kernels have config.gz support turned on? If so, it should be very easy to rebuild a kernel (a quick check is sketched below). Although I'm guessing that Ubuntu etc. use initramfses these days, making it a bit harder to build your own kernel. Is there a reason to always use the provided Ubuntu kernel, or is it impossible to use a non-Ubuntu-packaged kernel?

                      Really though, I'm curious why it's always "THE SKY IS FALLING" sort of news about some version/check-in of the kernel as it relates to ext4 or btrfs. Don't get me wrong, I like to see people testing new code, and if I had more time/hardware I would be doing so as well.
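                      On the config.gz question above: a kernel built with CONFIG_IKCONFIG_PROC exposes its configuration at /proc/config.gz, which makes it easy to reuse the running kernel's options for a rebuild. A small check, assuming nothing beyond the standard library:

                      ```python
                      import gzip

                      try:
                          with gzip.open("/proc/config.gz", "rt") as f:
                              # Pull out the Btrfs-related options as an example of reading
                              # the running kernel's configuration.
                              btrfs_opts = [line.strip() for line in f
                                            if line.startswith("CONFIG_BTRFS")]
                          print("\n".join(btrfs_opts) or "no CONFIG_BTRFS entries found")
                      except FileNotFoundError:
                          print("this kernel was built without CONFIG_IKCONFIG_PROC")
                      ```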

                      • #12
                        C'mon Phoronix, you can do better. First of all, do these write benchmarks write random data or zeroes? Is the hardware doing compression on its own anyway? And then you use a kernel in which you know Btrfs has regressed, and try to compare it to a filesystem which has matured.

                        Try testing a wide range of kernels using btrfs and btrfs -o compress. Use normal hardware, that is, an HDD, and an SSD that does not use compression internally.

                        • #13
                          I agree. Don't test only Ubuntu in its default configuration.

                          Also try Arch Linux with only Openbox.

                          Try different kernels with different I/O and CPU schedulers, like this one:
                          http://pf-kernel.org.ua/
                          (There's a good PKGBUILD for Arch Linux)

                          • #14
                            I think the benchmark draws a slightly wrong conclusion: compressed Btrfs has to spend some CPU to keep the I/O maxed out. So workloads that are I/O-bound with little CPU usage, like unpacking the Linux kernel or starting a big application (which takes far less CPU than compressing the data did), will naturally see better speed. For benchmarks that are both CPU- and I/O-bound, the CPU starvation will hurt I/O speed. Rotating media, which is slower on average than an SSD, will also show a much bigger imbalance on reads. In the end, data that compresses well will likely benefit from being decompressed on read: starting a Linux desktop, for example, is bound by reading a lot of config files and is mostly not CPU-bound. So even if this benchmark doesn't show it, using a desktop configuration and measuring GNOME session startup time would probably show some improvement, while a compile benchmark (which most users don't run) will not look as favorable.

                            • #15
                              I'm having trouble accepting those Dbench 4.0, 12-client results for ext4. The max read and write speeds of that drive should be roughly in the ballpark of 280 MB/s, yet the benchmark chart claims around 960 MB/s for ext4, which is far beyond what the hardware is actually capable of. Is this test showing some kind of problem with the in-memory caching (dcache) of btrfs vs ext4? Is it a bug in Dbench producing crazy results for ext4? The previous reviews I looked up for ext4 vs btrfs all use Dbench with 1 client, not 12, so I can't easily compare the new results against the old or figure out what caused the seemingly impossible numbers.
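                              One way such above-hardware numbers can appear is that, without an fsync, writes land in the page cache and return at RAM speed. A small sketch that times the same write with and without forcing it out to the device; the file path and size are placeholders:

                              ```python
                              import os
                              import time

                              PATH = "/tmp/cache_demo.bin"             # hypothetical scratch file
                              payload = os.urandom(256 * 1024 * 1024)  # 256 MiB of test data

                              start = time.perf_counter()
                              with open(PATH, "wb") as f:
                                  f.write(payload)                     # buffered write: mostly hits RAM
                              buffered = time.perf_counter() - start

                              start = time.perf_counter()
                              with open(PATH, "wb") as f:
                                  f.write(payload)
                                  f.flush()
                                  os.fsync(f.fileno())                 # force the data out to the device
                              synced = time.perf_counter() - start

                              size_mib = len(payload) / (1 << 20)
                              print(f"buffered: {size_mib / buffered:.0f} MiB/s")
                              print(f"fsynced:  {size_mib / synced:.0f} MiB/s")
                              os.remove(PATH)
                              ```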
