Btrfs Gets Big Changes, Features In Linux 3.14 Kernel

  • #41
    Sorry that I started the whole lz4 debate. There are some significant things ... just looking at the start of the list:

    Code:
    Filipe David Borba Manana (29) commits (+1856/-301):
    Btrfs: fix deadlock when iterating inode refs and running delayed inodes (+12/-7)
    Btrfs: fix send file hole detection leading to data corruption (+15/-0)
    Btrfs: remove field tree_mod_seq_elem from btrfs_fs_info struct (+0/-1)
    Btrfs: make send's file extent item search more efficient (+17/-10)
    Btrfs: fix infinite path build loops in incremental send (+518/-21)
    Things like that are probably more important than new features, and I would like to see btrfs become stable. (That's just the first few lines from the pull request changelog.)



    • #42
      Originally posted by jwilliams View Post
      Wrong again. You seem to make a habit of stating incorrect things, and then you tell me that I need to learn how to read. Heh. You need to learn how to write, or at least how to avoid making incorrect statements.
      Where are the phones shipping btrfs? Or are you talking about some custom hacked phone you made in your basement?



      • #43
        Originally posted by mercutio View Post
        The difference isn't minor. As a general idea, here's fsbench on an i7-4770 CPU with 1 thread:
        Code:
         # ./fsbench -b8192 -t1 lz4 zlib lzo /src/text/dickens 
        Codec                                   version      args
        C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
        LZ4                                     r97          
            7282840 (x 1.400)      363 MB/s 1859 MB/s       103e6  105e6
        zlib                                    1.2.8        6
            4670489 (x 2.182)     34.0 MB/s  219 MB/s        18e6   19e6
        LZO                                     2.06         1x1
            6969959 (x 1.462)      329 MB/s  468 MB/s       104e6  100e6
        Codec                                   version      args
        C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
        done... (4*X*1) iteration(s)).
        
        and with 8 threads:
        # ./fsbench -b8192 -t8 lz4 zlib lzo /src/text/dickens  
        Codec                                   version      args
        C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
        LZ4                                     r97          
            7282840 (x 1.400)     1979 MB/s 7951 MB/s       564e6  566e6
        zlib                                    1.2.8        6
            4670489 (x 2.182)      164 MB/s 1159 MB/s        89e6   94e6
        LZO                                     2.06         1x1
            6969959 (x 1.462)     1848 MB/s 2483 MB/s       584e6  588e6
        Codec                                   version      args
        C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
        done... (4*X*1) iteration(s)).
        It's quite obvious from those numbers that decompression with LZO on a single thread is slower than an SSD, while LZ4 is significantly faster than an SSD; the same should hold true on older CPUs too. I used an 8K block size because that's closer to what file systems use, and it's the standard block size of SSDs these days. I have no idea what file systems currently do to match block sizes of hard disks/SSDs/RAID arrays when using compression; they may still not be ideal.
        Thanks for actually providing some numbers here. But I think this backs up my view. How many people actually need to decode at more than 470MB/s? How many people are going to notice if it's faster than that - even significantly faster? Again, I'm all for making the change. I just don't think most people would view it as "significant" given those numbers - except on benchmarks, where it obviously looks awesome.
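        For anyone who wants to sanity-check throughput figures like these without installing fsbench, here's a rough sketch using Python's standard-library zlib (LZ4 and LZO bindings aren't in the stdlib, so zlib level 6 stands in); the 8 KiB block size mirrors the -b8192 flag above, and the sample text is an arbitrary assumption.

```python
import time
import zlib

# Rough sketch of what fsbench measures: compress/decompress in 8 KiB
# blocks (like -b8192) and report throughput. zlib level 6 stands in
# for LZ4/LZO, which are not in the Python standard library.
BLOCK = 8192
data = b"It was the best of times, it was the worst of times. " * 4096
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

t0 = time.perf_counter()
compressed = [zlib.compress(b, 6) for b in blocks]
enc_s = time.perf_counter() - t0

t0 = time.perf_counter()
restored = [zlib.decompress(c) for c in compressed]
dec_s = time.perf_counter() - t0

ratio = len(data) / sum(len(c) for c in compressed)
print(f"ratio x{ratio:.3f}, "
      f"enc {len(data) / enc_s / 1e6:.0f} MB/s, "
      f"dec {len(data) / dec_s / 1e6:.0f} MB/s")
```

        Note that per-block compression like this (and like a filesystem's) gets a worse ratio than whole-file compression, because each 8 KiB block starts with an empty dictionary.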



        • #44
          Originally posted by smitty3268 View Post
          Thanks for actually providing some numbers here. But I think this backs up my view. How many people actually need to decode at more than 470MB/s? How many people are going to notice if it's faster than that - even significantly faster? Again, I'm all for making the change. I just don't think most people would view it as "significant" given those numbers - except on benchmarks, where it obviously looks awesome.
          Do you have any clue at all? Did you even look at the CPU in the benchmark? And you accuse me of not reading.



          • #45
            Originally posted by smitty3268 View Post
            Where are the phones shipping btrfs? Or are you talking about some custom hacked phone you made in your basement?
            I think I have seen somewhere that Jolla uses btrfs.



            • #46
              Originally posted by jwilliams View Post
              Do you have any clue at all? Did you even look at the CPU in the benchmark? And you accuse me of not reading.
              You're just spewing nonsense now. Of course I saw the CPU. A 4770 is pretty fast, but it's not that crazy. And of course, I was looking at just the single-threaded performance, just to keep things fair. The numbers skyrocket if you add multi-threading.

              So does this go back to your phone argument or what?

              How about this - can you give me a straight up scenario in which you think this makes a major difference to someone? An actual real-world scenario, not just you saying "look at the benchmark numbers".

              Most filesystems don't even have built-in compression, and btrfs already has a pretty good implementation. Why do you think the current one is so deficient that an upgrade will result in huge user improvements? Personally, I'm glad he's still focusing on things like crashes and infinite loops, because that seems a heck of a lot more important to me.



              • #47
                Originally posted by Akka View Post
                I think I have seen somewhere that Jolla uses btrfs.
                Hmm, interesting. I can't seem to find anything official, but there do seem to be some comments out there that Jolla is using btrfs.

                I could see how lz4 might be interesting for them.

                The good news is, it should be extremely easy for them to implement it if they think it will help their product. And since they are just starting out in a highly competitive market, they can use every little advantage they can find. Seems like an easy win for them - unless of course they determine that it doesn't really make a difference in average phone usage. In which case they probably won't bother, but nobody will be missing it anyway.



                • #48
                  Originally posted by mercutio View Post
                  The difference isn't minor. As a general idea, here's fsbench on an i7-4770 CPU with 1 thread:
                  Code:
                   # ./fsbench -b8192 -t1 lz4 zlib lzo /src/text/dickens 
                  Codec                                   version      args
                  C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
                  LZ4                                     r97          
                      7282840 (x 1.400)      363 MB/s 1859 MB/s       103e6  105e6
                  zlib                                    1.2.8        6
                      4670489 (x 2.182)     34.0 MB/s  219 MB/s        18e6   19e6
                  LZO                                     2.06         1x1
                      6969959 (x 1.462)      329 MB/s  468 MB/s       104e6  100e6
                  Codec                                   version      args
                  C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
                  done... (4*X*1) iteration(s)).
                  
                  and with 8 threads:
                  # ./fsbench -b8192 -t8 lz4 zlib lzo /src/text/dickens  
                  Codec                                   version      args
                  C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
                  LZ4                                     r97          
                      7282840 (x 1.400)     1979 MB/s 7951 MB/s       564e6  566e6
                  zlib                                    1.2.8        6
                      4670489 (x 2.182)      164 MB/s 1159 MB/s        89e6   94e6
                  LZO                                     2.06         1x1
                      6969959 (x 1.462)     1848 MB/s 2483 MB/s       584e6  588e6
                  Codec                                   version      args
                  C.Size      (C.Ratio)        E.Speed   D.Speed      E.Eff. D.Eff.
                  done... (4*X*1) iteration(s)).
                  It's quite obvious from those numbers that decompression with LZO on a single thread is slower than an SSD, while LZ4 is significantly faster than an SSD; the same should hold true on older CPUs too. I used an 8K block size because that's closer to what file systems use, and it's the standard block size of SSDs these days. I have no idea what file systems currently do to match block sizes of hard disks/SSDs/RAID arrays when using compression; they may still not be ideal.
                  What percentage of data on your system is an uncompressed book? I'm not taking any sides; I have no idea whether or not it's worthwhile, but this benchmark is not that useful.
                  Do we have boot times? Boot time of a virtual machine stored on a btrfs filesystem? Time to back up a system disk? A user-data disk with Steam games (maybe compressible) and video/pictures (incompressible)? Compilation times without cache, or time to clone/update a big git repo?

                  I don't doubt that LZ4 is faster; the question is whether it's the bottleneck, and what gains we can expect in real life.
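                  One part of that question is easy to check directly: how much the compression ratio depends on what is stored. A small sketch with stdlib zlib, where random bytes stand in for already-compressed media (an approximation, since real media has container headers):

```python
import os
import zlib

# Compression gains depend heavily on the data: repetitive text
# compresses very well, while already-compressed media (approximated
# here by random bytes) barely shrinks at all.
text = b"All work and no play makes Jack a dull boy.\n" * 20000
media = os.urandom(len(text))  # stand-in for video/pictures/game assets

ratios = {}
for name, payload in (("text", text), ("media", media)):
    packed = zlib.compress(payload, 6)
    ratios[name] = len(payload) / len(packed)
    print(f"{name}: x{ratios[name]:.2f}")
```

                  On data like the `media` sample the ratio sits at roughly x1.0 (incompressible data even grows slightly), which is why a whole-disk benchmark looks very different from a single-book one.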



                  • #49
                    Originally posted by erendorn View Post
                    What percentage of data on your system is an uncompressed book? I'm not taking any sides; I have no idea whether or not it's worthwhile, but this benchmark is not that useful.
                    Do we have boot times? Boot time of a virtual machine stored on a btrfs filesystem? Time to back up a system disk? A user-data disk with Steam games (maybe compressible) and video/pictures (incompressible)? Compilation times without cache, or time to clone/update a big git repo?

                    I don't doubt that LZ4 is faster; the question is whether it's the bottleneck, and what gains we can expect in real life.
                    Yes, that was always part of my implied argument - that compressed fs have their place but don't provide anything like the boost in general usage that contrived benchmarks would seem to imply. Thanks for expressing it more clearly.

                    I do think executable files might show a bit of a boost for booting/app startup, but obviously it's not going to be to the degree shown here.
                    Last edited by smitty3268; 01 February 2014, 06:46 AM.
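                    On the executable point, a quick way to gauge the headroom is to check how compressible a binary actually is. A rough sketch using the running Python interpreter's binary as the sample (an arbitrary choice) and stdlib zlib standing in for LZ4/LZO:

```python
import sys
import zlib

# Compress an executable and report the ratio. sys.executable is just
# a convenient binary to sample; zlib level 6 stands in for LZ4/LZO.
with open(sys.executable, "rb") as f:
    blob = f.read()
packed = zlib.compress(blob, 6)
print(f"{len(blob)} -> {len(packed)} bytes "
      f"(x{len(blob) / len(packed):.2f})")
```

                    Machine code usually lands somewhere between text and media in compressibility, which fits the "a bit of a boost, not the benchmark numbers" expectation.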



                    • #50
                      Originally posted by smitty3268 View Post
                      Yes, that was always part of my implied argument - that compressed fs have their place but don't provide anything like the boost in general usage that contrived benchmarks would seem to imply. Thanks for expressing it more clearly.

                      I do think executable files might show a bit of a boost for booting/app startup, but obviously it's not going to be to the degree shown here.
                      And actually, as a btrfs+lzo user, I'm genuinely interested in these numbers.

