
Using Disk Compression With Btrfs To Enhance Performance


  • #21
    Originally posted by nanonyme:
    Fedora prefers dracut for initrd generation, I think. The syntax is nearly identical, except it requires the -f option to force replacement in case the initrd already exists.
    As I said... several commands.
    You could even do it manually if you felt such an inclination...

    Note: I find the images produced by dracut to be somewhat ... bloated.


    • #22
      Originally posted by cynyr:
      Why is there such concern over which kernel has a small performance regression in it (in a not-for-production-use FS, no less)? Can you not upgrade kernels in Ubuntu/Fedora/SUSE/etc.? Does a vanilla kernel not work? If 2.6.35 is bad for the default FS of Ubuntu/$DISTRO, surely they would ship 2.6.34 or some other version? If you don't like changes, run one of the long-term kernels; take your pick of the older kernels listed as stable: 2.6.34.x, 2.6.33.x, 2.6.27.x. All of these receive backported fixes for bugs and security issues.

      I'm sure I'm missing something, as I switched to Gentoo some 7 years ago after getting grumpy at not being able to use a vanilla kernel with some DRM patches on Red Hat (it was Red Hat then) and SUSE. It sure would be nice if someone made a "make config" option for the kernel, but Gentoo has genkernel and it tends to work. Do Ubuntu/Fedora kernels have config.gz support turned on? If so, it should be very easy to rebuild a kernel. Although I'm guessing that Ubuntu etc. use initramfses these days, making it a bit harder to build your own kernel. Is there a reason to always use the provided Ubuntu kernel? Or is it impossible to use a non-Ubuntu-packaged kernel?

      Really though, I'm curious why it's always "THE SKY IS FALLING"-type news related to some version/check-in of the kernel as it relates to ext4 or btrfs. Don't get me wrong, I like to see people testing new code, and if I had more time/hardware I would be as well.
      I'm not sure what you are complaining about in particular, and I don't think it's really about whether it's easy to build a new kernel or not.

      I commented that I thought it was not really helpful to post a benchmark which, as you well know, will show very poor btrfs performance when this is not indicative of the filesystem in general, but of a bug in the specific kernel you are benchmarking... most people will just look at the graphs and not realize that this was the expected outcome, or why.

      As for the upcoming distributions, most people will not want to take such a vital piece of the system out of the auto-update mechanism, etc. ... and some binary drivers depend on the kernel version... not to mention that installing on btrfs with this bug can take up to 10 times longer than installing on another filesystem... some people have reported more than 10 hours to install, and you cannot install a different kernel before installing!

      TBH, it's really still an experimental filesystem, regardless of how it's billed... it's been incredibly stable for me, but until the tools ship with a working "fsck"... what can I say. But it needs people to want to start testing, developing tools that take advantage of the features, and just getting to know how to manipulate the filesystem... and the unfortunate fact that this kind of bug slipped into the kernel which will be used for the next round of distributions will set back people's willingness to do that, and hence the uptake of the filesystem.

      Of course, that may be a good thing in the long run... 6 more months to stabilise before widespread testing could be a good thing: a working 'fsck', as well as 'raid5/6', more debugging and optimisation, etc. ... but it won't help with getting it into people's hands to develop tools which make use of it for inclusion in later distributions... and without those kinds of tools the filesystem doesn't provide the advantage that it has the potential to.


      • #23
        Using iozone to claim that btrfs with compression has better performance is bulls*it. iozone uses a very simple pattern for writing, so it is no big miracle that this data compresses so well. Please use realistic benchmarks, not microbenchmarks (which are useful, but need careful interpretation).
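To see why a repetitive write pattern flatters a compressing filesystem, here is a quick sketch (filenames are arbitrary) comparing how well a repetitive buffer compresses versus random data; transparent compression like btrfs+lzo gets the same kind of free win when a benchmark writes repetitive data:

```shell
# Write 1 MiB of a repetitive pattern and 1 MiB of random bytes,
# then gzip both and compare the compressed sizes.
head -c 1048576 /dev/zero    > pattern.bin
head -c 1048576 /dev/urandom > random.bin
gzip -c pattern.bin > pattern.bin.gz
gzip -c random.bin  > random.bin.gz
# The repetitive file shrinks to roughly a kilobyte;
# the random file does not shrink at all.
ls -l pattern.bin.gz random.bin.gz
```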


        • #24
          I actually tested this.

          I have a computer with 2 SSDs; one is 2~4x faster than the other... so I installed Debian 7 on the faster one, and created one ext4 and then one btrfs partition on the slow one.

          I did some tests running Apache from there and also running virtual machines (counting the boot time plus Visual Studio compile time). The differences were minimal, but they were there, and they point the same way as the more telling test I'm going to relate here:

          The naive FS benchmark test! Simply copying /usr to the new drive. First with the empty drive, then with the drive already holding the previous copy. Twice.

          ext4 did it in 4m, 5m40, 6m02. (I didn't write down the exact seconds for the first two tests before closing the terminal.)

          btrfs did it in 4m, 4m20, 4m36.

          btrfs with lzo compression took >6m on the first two passes, so I ignored it... I was planning to write zeroes to the whole disk and then do the copy with and without compression to see how much it actually impacts media usage, but after that performance hit I gave up.
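For reference, lzo compression on btrfs is typically enabled as a mount option; a minimal sketch, where the device and mount point are hypothetical:

```shell
# Mount a btrfs filesystem with transparent LZO compression (needs root).
mount -o compress=lzo /dev/sdb1 /mnt/btrfs

# Or persistently, via an /etc/fstab entry:
# /dev/sdb1  /mnt/btrfs  btrfs  compress=lzo  0  0
```

Note that the compress option only affects data written after it is set; existing files stay uncompressed until rewritten.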


          So this is the result: ext4 and btrfs, untuned, Debian 7 defaults. btrfs without compression is faster for file copies and arguably for running VMs.
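The copy benchmark above can be sketched roughly like this; the directories are small stand-ins for /usr and the target drive, and on a real run you would also drop the page cache between passes:

```shell
# Build a small stand-in source tree (the real test copied /usr).
src=$(mktemp -d)
dst=$(mktemp -d)
for i in $(seq 1 50); do
    head -c 8192 /dev/urandom > "$src/file$i"
done
sync
# On a real benchmark, drop caches first (needs root):
#   echo 3 > /proc/sys/vm/drop_caches
time cp -a "$src"/. "$dst"/
```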

          Update: re-ran the tests with noauto. btrfs without compression now takes the same time as before, a little worse :/ can't explain it. And ext4 is down to 3m27, 3m32. It is the clear winner if you are just setting up a new laptop with an SSD and do not want to overthink it too much.

          Update 2:
          Testing only reading (my VM tests involved a lot of writing; now I'm just cat-ing a bunch of small files (~1500 files, totaling ~160 MB) to /dev/null): ext4 always wins by ~0.5s out of the total 6s.
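The read-only test described in Update 2 can be sketched like this, with the file count and sizes scaled down (the original used ~1500 files totaling ~160 MB):

```shell
# Create a batch of small files, then time reading them all to /dev/null.
dir=$(mktemp -d)
for i in $(seq 1 100); do
    head -c 4096 /dev/urandom > "$dir/f$i"
done
sync
time cat "$dir"/f* > /dev/null
```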
          Last edited by gcb0; 10-18-2013, 03:23 AM.


          • #25
            The benchmarks should have been done on both an SSD and an HDD. The results might differ.