Where The Btrfs Performance Is At Today


  • #31
    Which I/O scheduler was this? CFQ? It would be nice to have results with both noop and deadline, as they're reported to work better with SSDs.
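    For reference, the active scheduler can be checked and switched per device through sysfs. A minimal sketch (assuming the benchmarked drive is sda; run as root, and the change only lasts until reboot):

from pathlib import Path

queue = Path("/sys/block/sda/queue/scheduler")  # "sda" is an assumption; substitute your device

# The file lists the available schedulers, with the active one in brackets,
# e.g. "noop deadline [cfq]".
print(queue.read_text().strip())

# Writing another name switches the scheduler for this device (root required).
queue.write_text("deadline")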

    Also, benchmarks on old-fashioned mechanical spinning disks, please.

    CPU usage would be great too.

    • #32
      I don't know if this has actually been said, but I wouldn't make too much of the performance advantage when enabling compression. Synthetic benchmarks, when generating data to test with, probably tend toward redundancy, with lots of patterns and repetition. This kind of data is much more compressible than "real life" data tends to be. For a (very naive) example, even a simple RLE compression scheme could effectively reduce disk I/O from "00000000" to "80" (eight zeros).
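      A minimal sketch of that naive run-length idea, just to show how far even a trivial scheme goes on repetitive benchmark data (the encoding format here is made up for the example):

def rle_encode(data: bytes) -> bytes:
    """Naive run-length encoding: emit (run length, byte value) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.extend((run, data[i]))
        i += run
    return bytes(out)

print(rle_encode(b"\x00" * 8).hex())  # '0800' -- eight zero bytes collapse into two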

      • #33
        Originally posted by MaxToTheMax View Post
        I don't know if this has actually been said, but I wouldn't make too much of the performance advantage when enabling compression. Synthetic benchmarks, when generating data to test with, probably tend toward redundancy, with lots of patterns and repetition. This kind of data is much more compressible than "real life" data tends to be. For a (very naive) example, even a simple RLE compression scheme could effectively reduce disk I/O from "00000000" to "80" (eight zeros).
        compression can actually make a tremendous difference. while i don't know how the benchmarks work, your system is chock full of text files and other highly compressible data. little is in pre-compressed formats such as images/videos/etc. thus the compression can drastically improve the read times and some seek times of the disk, SSD or rotating. however, the CPU impact is visible for zlib compression, which is rather CPU heavy. this will change noticeably once btrfs supports alternate compression algos, notably LZMA(2).

        the apache test looks flawed to me.

        i've been using btrfs on my primary system (Arch) for over a year with no noticeable degradation in performance, and all the benefits of system rollback and per-volume mount options.

        in case anyone didn't notice an earlier post (from the btrfs wiki)....

        mount -o nodatacow also disables compression, so the btrfs compress+nodatacow run was totally pointless and misleading.

        • #34
          Benchmarks can be as synthetic as they get, but if it can't keep up with what's already there, it has a long road ahead.

          I might not have any bandwidth- or latency-sensitive applications here, but I still don't want a software component to waste too much time for nothing, no matter how many cores are idling. It just feels wrong.

          • #35
            Originally posted by kebabbert View Post
            I didn't understand that sentence. Could you please rephrase?
            Read the reply to this post.

            Ways to crash any computer:

            1) Cut all power
            2) Hardware breakdown inside the computer
            3) A cable corrodes or loosens
            4) On most personal computers, hold down the power button for five seconds or longer.

            This last mechanism, 4), is the one I use when a virus, trojan, mistake, or crash happens and there seems to be no time, or no fear of further damage. I do it often (weekly).

            Greg Zeng, Australian Capital Territory.

            • #36
              Originally posted by paravoid View Post
              Interesting results. Indeed I also suspect some performance impact on btrfs compress mode at least.
              Also, I would be curious about the outcome with at least two additional kernel versions: stable vanilla (2.6.34) and development (2.6.35-rc2 or better current git head).

              Thanks!
              I would prefer a comparison with compressed NTFS. My data works best with Windows applications, on NTFS (not NTFS-3G, which AFAIK is not compressed).

              The arrival of the new hybrid HDDs (Seagate Momentus XT and others, with an inbuilt smart SSD cache, etc.) will distort all these benchmarks.

              Greg Zeng. Australian Capital Territory

              • #37
                Originally posted by MaxToTheMax View Post
                I don't know if this has actually been said, but I wouldn't make too much of the performance advantage when enabling compression. Synthetic benchmarks, when generating data to test with, probably tend toward redundancy, with lots of patterns and repetition. This kind of data is much more compressible than "real life" data tends to be. For a (very naive) example, even a simple RLE compression scheme could effectively reduce disk I/O from "00000000" to "80" (eight zeros).
                I registered here to express the same idea and I agree -- any result from the compression test using synthetic benchmarks is spurious. Likely these programs dump zeros into a file to test the write performance, the equivalent of 'dd if=/dev/zero of=testfile'. As you said, this data is enormously compressible, and results for "real" data are nowhere near as favourable.
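                A quick way to see how lopsided that is: compare an all-zero buffer (what a dd-from-/dev/zero style test writes) with random bytes standing in for incompressible "real" data. The 16 MiB size below is arbitrary, purely for illustration:

import os
import zlib

size = 16 * 1024 * 1024                  # arbitrary 16 MiB test payload
zeros = b"\x00" * size                   # what a 'dd if=/dev/zero' style benchmark writes
noise = os.urandom(size)                 # stand-in for already-compressed / random "real" data

for name, buf in (("zeros", zeros), ("random", noise)):
    ratio = len(zlib.compress(buf, 1)) / size
    print(f"{name}: compressed to {ratio:.3%} of original size")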

                • #38
                  Originally posted by extofme View Post
                  compression can actually make a tremendous difference. while i don't know how the benchmarks work, your system is chock full of text files and other highly compressible data. little is in pre-compressed formats such as images/videos/etc. thus the compression can drastically improve the read times and some seek times of the disk, SSD or rotating.
                  As you say, compression can improve access times even though it requires additional processing. Accessing disks is slow, and the time required to (de)compress the data is far, far less than the time required to access extra blocks on disk. Enabling compression on Windows XP / NTFS partitions results in faster boot times, for example, because system binaries are compressible.
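                  Rough back-of-envelope numbers for that trade-off (the throughput and ratio figures below are hypothetical, just to show the shape of it):

# Hypothetical figures: ~80 MB/s sequential HDD reads, ~400 MB/s zlib decompression,
# and a 2:1 compression ratio on typical system binaries and text.
disk_mb_s, decompress_mb_s, ratio = 80.0, 400.0, 0.5
data_mb = 100.0

raw_time = data_mb / disk_mb_s                                             # read 100 MB uncompressed
compressed_time = data_mb * ratio / disk_mb_s + data_mb / decompress_mb_s  # read 50 MB, then decompress

print(f"raw: {raw_time:.2f}s  compressed: {compressed_time:.2f}s")
# raw: 1.25s  compressed: 0.88s -- the CPU work costs far less than the extra disk reads it avoids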

                  Max is correct though; the result of compression on real data will not be as drastic as the benchmark results in this article imply. While there will likely be a performance improvement on real data (such as system binaries) the results shown in the article are misleading as to how much of a gain you're likely to achieve by enabling compression.

                  however, the CPU impact is visible for zlib compression, which is rather CPU heavy. this will change noticeably once btrfs supports alternate compression algos, notably LZMA(2).
                  You have this backwards. Zlib is a lightweight algorithm and is frequently used for on-the-fly compression. It's also used in embedded applications where CPU time and battery power are limited. LZMA, on the other hand, gives much better compression and is more computationally expensive.
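                  For anyone curious, the difference is easy to see with Python's zlib and lzma modules on some repetitive, text-like data (the payload below is made up, and absolute times depend on the machine):

import lzma
import time
import zlib

payload = b"btrfs compresses each extent before it hits the disk. " * 200_000  # ~11 MB of text-like data

for name, compress in (("zlib", lambda d: zlib.compress(d, 6)),
                       ("lzma", lambda d: lzma.compress(d, preset=6))):
    start = time.perf_counter()
    out = compress(payload)
    elapsed = time.perf_counter() - start
    print(f"{name}: ratio {len(out) / len(payload):.4f}, {elapsed:.2f}s")

# Expected pattern: lzma produces a noticeably smaller output but burns far more CPU time than zlib.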

                  • #39
                    Where The Btrfs Performance Is At Today
                    The better question is: where is Btrfs stability today?
