Testing Out Btrfs In Ubuntu 10.10


  • #16
    Originally posted by kebabbert View Post
    Do you really expect ZFS development to freeze in time, until BTRFS plays catch up? Just recently ZFS Dedup was added. I wonder what functionality will be added in a couple of years.

    BTW, it takes decades to iron out all the bugs in a file system. It takes at least 5 years after announcing v1.0 before anyone uses it in production. ZFS was officially announced after development in secrecy, and after that it took several years before it was let into production. When BTRFS hits v1.0 it will take several years before anyone trusts it in production.

    As someone said, "filesystems should not be sexy. They should be boring, trusted technology" - implying that he will not let ZFS into his computer halls before at least 10 years have passed and ZFS has become mature enough.
    Well, file systems are not bottles of wine that sit in a corner and get better with age. They have to be used to become mature...



    • #17
      I remember Btrfs being (abnormally) slow with databases, and such tests are not present in this benchmark.
      To see what I'm talking about, look at the previous Btrfs benchmarks.



      • #18
        Originally posted by kebabbert View Post
        BTW, it takes decades to iron out all the bugs in a file system. It takes at least 5 years after announcing v1.0 before anyone uses it in production. ZFS was officially announced after development in secrecy, and after that it took several years before it was let into production. When BTRFS hits v1.0 it will take several years before anyone trusts it in production.
        I disagree --> It takes a very large amount of satisfactory user testing to get acceptance.

        Few users --> decades



        • #19
          It would be nice to have a CPU load test showing the difference between btrfs and btrfs + encryption.



          • #20
            IOzone writes zeros for its benchmarks, so testing IOzone with btrfs compression enabled makes no sense.
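            To illustrate the point, here is a rough sketch using plain zlib in Python (the same algorithm btrfs compression uses): zero-filled buffers like IOzone's test data shrink to almost nothing, so throughput numbers from a compressed mount say little about real data.

```python
import os
import zlib

# Toy comparison: the all-zero data IOzone writes versus incompressible data.
zero_block = b"\x00" * (1024 * 1024)     # what IOzone writes by default
random_block = os.urandom(1024 * 1024)   # stands in for "real", incompressible data

for name, block in (("zeros", zero_block), ("random", random_block)):
    compressed = zlib.compress(block)
    print(f"{name:6s}: {len(block)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(block):.1%})")
```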



            • #21
              Originally posted by cl333r View Post
              I remember Btrfs being (abnormally) slow with databases, and such tests are not present in this benchmark.
              To see what I'm talking about, look at the previous Btrfs benchmarks.
              If the other tested file systems didn't flush data to disk while btrfs did, we don't know how it really performs. Look at the ZFS tests on Phoronix; it was very slow there.
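              As a rough illustration of how much flushing dominates this kind of test, here is a small Python sketch (not a real benchmark) that times the same sequence of small writes with and without fsync():

```python
import os
import tempfile
import time

def write_records(path, use_fsync, records=200, size=4096):
    """Write `records` blocks of `size` bytes, optionally fsync()ing each one."""
    buf = b"x" * size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(records):
            f.write(buf)
            if use_fsync:
                f.flush()
                os.fsync(f.fileno())   # force the data down to stable storage
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    fast = write_records(os.path.join(tmp, "nosync.bin"), use_fsync=False)
    slow = write_records(os.path.join(tmp, "sync.bin"), use_fsync=True)
    print(f"without fsync: {fast:.3f}s, with fsync per record: {slow:.3f}s")
```

              A file system that actually honours the flush looks much slower in the second case, even though it is doing exactly what the benchmark asked for.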



              • #22
                OK Phoronix, did you manage to get Ubuntu installed onto a btrfs partition that was mounted with compress, or did you just run the compress tests later? The Xubuntu daily build has btrfs, but I didn't see compress in the mount options.



                • #23
                  Originally posted by cl333r View Post
                  I remember Btrfs being (abnormally) slow with databases, and such tests are not present in this benchmark.
                  That's probably a consequence of copy-on-write combined with aggressive syncing: that will pretty much ensure that your database file ends up spread around the disk in small chunks with massive fragmentation.
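                  A toy model (not real btrfs behaviour, just the idea) shows why random, database-style updates shred a copy-on-write file into many extents while in-place updates leave it contiguous:

```python
import random

def count_extents(mapping):
    """Count runs of physically contiguous blocks in logical file order."""
    extents = 1
    for prev, cur in zip(mapping, mapping[1:]):
        if cur != prev + 1:
            extents += 1
    return extents

blocks = 10_000
in_place = list(range(blocks))          # in-place updates keep the original layout
cow = list(range(blocks))
next_free = blocks                      # copy-on-write sends every rewrite to fresh space

random.seed(0)
for _ in range(5_000):                  # random 4K-style database updates
    i = random.randrange(blocks)
    cow[i] = next_free                  # the rewritten block lands somewhere new
    next_free += 1

print("in-place extents:     ", count_extents(in_place))
print("copy-on-write extents:", count_extents(cow))
```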



                  • #24
                    Originally posted by jetpeach View Post
                    Hi, I'm curious about the transparent compression - I searched Google but didn't find a lot of useful information on it (just that it is zlib compression). Does it actually make the files stored on the hard drive smaller? And if so, wouldn't its performance be highly dependent on the type of file and how much it can be compressed? (Like media files performing badly, while txt files do well?)

                    And as others have asked, I would guess the CPU usage is very important with compression enabled. And along with the CPU usage, would it support parallel processing? I rarely use both cores on my computer since most tasks can't, and if compression can harness the core that would otherwise sit idle during a single-threaded task, that would be awesome as well.

                    Excited for a new FS!
                    jetpeach


                    It would not be compression if the files were not actually smaller. I believe the compression is only applied to small files, as larger ones are usually precompressed.
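                    To get a feel for how content-dependent it is, here is a quick zlib sketch (the algorithm btrfs uses for its transparent compression) comparing ratio and CPU time for repetitive text versus already-compressed-looking data; note that plain zlib, as used here, is single-threaded:

```python
import os
import time
import zlib

samples = {
    "text-like": b"the quick brown fox jumps over the lazy dog\n" * 25000,
    "media-like": os.urandom(1_100_000),   # random bytes mimic already-compressed media
}

for name, data in samples.items():
    start = time.perf_counter()
    compressed = zlib.compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name:10s}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.1%}) in {elapsed * 1000:.1f} ms")
```

                    Text collapses to a few percent of its size, while the random data barely shrinks yet still costs CPU time to scan - roughly why compressible workloads benefit and precompressed media does not.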



                    • #25
                      I really wish these benchmarks included at least one of the older non-ext filesystems in them. Please, throw in JFS (my fave) or at least XFS or Reiser3, the next time you do one of these. When I look at these articles, it's because I'm wondering, "How long until I start using one of the newer filesystems?" so I wanna see how the cool new stuff compares to the old stable stuff.

                      ext4 isn't btrfs' only "competitor." Don't forget the legacies, because realistically, I think that's where most of us are, right now.



                      • #26
                        Originally posted by Blue Beard View Post
                        I disagree --> It takes a very large amount of satisfactory user testing to get acceptance.

                        Few users --> decades
                        If you are talking about production systems at large enterprise companies, the only thing that matters is whether they can trust the technology. They only run old, mature stuff, never new bleeding-edge stuff. For these companies it does not matter whether there are many users or not, as long as the technology is mature and safe.

                        But sure, if you talk about home users, then it is a different thing.



                        • #27
                          Wikipedia explains many of the unknowns listed above. BTRFS seems to me like a primitive version of the closed-source M$ NTFS-COMPRESSED.

                          All my data & archive partitions on all drives are M$ NTFS-COMPRESSED partitions. I have little trouble reading/writing to these M$ NTFS-COMPRESSED partitions.

                          I expect that, as M$ claims, "compression" has negligible effect on the CPU. Drive I/O speeds are usually slow, so M$ NTFS-COMPRESSED speeds up this slowest part of computer usage.

                          Linux-only file systems wrongly claim that they never need defragmenting. Luckily M$ Windows has many free defrag & undelete programs.

                          Pity that NTFS-3G cannot match the official NTFS: no encryption, no compression. There are a few different M$ NTFS variants.

                          Using Linux benchtest programs, you can easily test the many file system types on any hard disk drive. Create 4GB partitions close to each other on your drive, then repeat the first 4GB partition again at the end of the partition group. This will show you the effect of cylinder speed across those slight differences in partition position.

                          Once you've done the tests, it is very easy to remove these unnecessary partitions.
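                          If you just want to see the cylinder-speed effect on its own, a rough read-only sketch like the one below times sequential reads at the start and at the end of the disk without creating any partitions. It assumes a whole-disk device such as /dev/sdb (a placeholder name) and root privileges, and needs cold caches for meaningful numbers:

```python
import os
import time

DEVICE = "/dev/sdb"        # placeholder device name - change to suit; opened read-only
CHUNK = 1024 * 1024        # 1 MiB per read
TOTAL = 256 * CHUNK        # read 256 MiB per region

def read_region(offset):
    """Sequentially read TOTAL bytes starting at `offset`, return bytes/second."""
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.perf_counter()
        remaining = TOTAL
        while remaining > 0:
            data = os.read(fd, min(CHUNK, remaining))
            if not data:
                break
            remaining -= len(data)
        return (TOTAL - remaining) / (time.perf_counter() - start)
    finally:
        os.close(fd)

fd = os.open(DEVICE, os.O_RDONLY)
disk_size = os.lseek(fd, 0, os.SEEK_END)   # size of the block device in bytes
os.close(fd)

# Outer cylinders (start of disk) are normally noticeably faster than inner ones.
print(f"start of disk: {read_region(0) / 1e6:6.1f} MB/s")
print(f"end of disk:   {read_region(disk_size - TOTAL) / 1e6:6.1f} MB/s")
```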

                          Retired (medical) IT Consultant, Australian Capital Territory

