Testing Out Btrfs In Ubuntu 10.10

  • Testing Out Btrfs In Ubuntu 10.10

    Phoronix: Testing Out Btrfs In Ubuntu 10.10

    Yesterday we reported that Ubuntu 10.10 gained Btrfs installation support and since then we have been trying out this Btrfs support in Ubuntu "Maverick Meerkat" and have a fresh set of Btrfs benchmarks to serve up.


  • #2
    Mmmh, still no CPU usage stats.

    Can someone enlighten me: where does btrfs come from, and what's the main difference from ext3/4 that makes it perform somewhat better than ext4?



    • #3
      CPU stats

      We really need to see CPU charts to tell whether 'this' is better than 'that'.
      I know compression takes CPU time; it has to. The question is how much?
      Think about Atom CPUs paired with a filesystem that uses compression: ouch.

      It would be nice to have near-real-time CPU charts with every test.

      One test could show something that looks slow but uses half the CPU time.

      And I don't think it would be too hard to track RAM usage on top while you're already measuring CPU.
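As a rough illustration of the kind of numbers such a chart would capture, here is a minimal Python sketch that times a zlib compression pass (zlib being the algorithm btrfs's compression uses) and reports both CPU and wall-clock time. The payload and compression level are arbitrary choices for the example, not anything btrfs-specific:

```python
import time
import zlib

def timed_compress(data, level=6):
    """Compress data and report CPU time and wall-clock time spent."""
    wall_start = time.time()
    cpu_start = time.process_time()
    compressed = zlib.compress(data, level)
    cpu_used = time.process_time() - cpu_start
    wall_used = time.time() - wall_start
    return compressed, cpu_used, wall_used

# A highly compressible payload, roughly what a text-heavy benchmark writes.
payload = b"phoronix benchmark line\n" * 100_000

compressed, cpu_s, wall_s = timed_compress(payload)
print(f"ratio: {len(compressed) / len(payload):.3f}")
print(f"cpu:   {cpu_s:.3f}s  wall: {wall_s:.3f}s")
```

On a slow CPU the cpu figure is the one that grows, which is exactly the trade-off an Atom box would feel.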



      • #4
        Btrfs compression??

        Can anyone tell me what this "Btrfs compression" is?



        • #5
          Compression

          Hi, I'm curious about the transparent compression - I searched Google but didn't find much useful information on it (just that it's zlib compression). Does it actually make the files stored on the hard drive smaller? And if so, wouldn't its performance be highly dependent on the type of file and how well it compresses? (Like media files performing badly while text files do well?)

          And as the others have asked, I would guess CPU usage is very important with compression enabled. Along with the CPU usage, does it support parallel processing? I rarely use both cores on my computer since most tasks can't, and if it can harness the processing from the core that would have been idle anyway during a single-threaded task, that would be awesome as well.

          Excited for a new FS!
          jetpeach
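Since btrfs's transparent compression is zlib-based, the file-type dependence asked about above is easy to demonstrate from userspace with Python's zlib module. Random bytes stand in for already-compressed media here; that substitution is an assumption of the sketch, not a btrfs measurement:

```python
import os
import zlib

def ratio(data):
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

text = b"the quick brown fox jumps over the lazy dog\n" * 5_000
random_like = os.urandom(len(text))   # stands in for already-compressed media

print(f"text:  {ratio(text):.3f}")    # shrinks dramatically
print(f"media: {ratio(random_like):.3f}")  # stays at (or slightly above) 1.0
```

So text-heavy workloads should see real savings, while video or music files gain nothing and only pay the CPU cost.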



          • #6
            Btrfs info

            Btrfs is intended to be more scalable and flexible than ext3/4. Essentially, ext4 is a stop-gap measure to get some more life out of the ext design while Btrfs matures. Some of the things Btrfs supports that ext4 does not: quick and easy snapshots (better than LVM snapshots, since they know about the filesystem structure), online defragmentation, and online growing and shrinking. Btrfs, once complete, should do pretty much everything that ZFS does, plus some things that ZFS doesn't.

            jetpeach:
            The compression does make the files stored on the disk smaller, and its effectiveness does depend on the type of file.



            • #7
              A Short History of btrfs

              “A short history of btrfs” (LWN.net, July 22, 2009) by Valerie Aurora (formerly Henson) is available at http://lwn.net/Articles/342892/



              • #8
                Btrfs provides the foundation for many useful features.

                Snapshots are point-in-time captures of your data. Most people would recognize system rollback, which depends on snapshots.

                Then there's backup, the thing most people don't do until the data is lost. With snapshots, taking a backup point is almost instantaneous.

                When combined with distributed data storage systems like Ceph, you get replication, protection, and performance.



                • #9
                  Originally posted by Blue Beard:
                  “A short history of btrfs” (LWN.net, July 22, 2009) by Valerie Aurora (formerly Henson) is available at http://lwn.net/Articles/342892/
                  That's a great article on btrfs, everyone should read it.

                  btrfs (B-tree file system) really isn't being created to beat existing filesystems on performance; the idea is to get a bunch of really cool new features, and to optimize it all enough to keep it from slowing things down.



                  • #10
                    Originally posted by jetpeach:
                    Hi, I'm curious about the transparent compression - I searched Google but didn't find much useful information on it (just that it's zlib compression). Does it actually make the files stored on the hard drive smaller?
                    Yes. Although the main reason to do this is to reduce the amount that has to be read from the disk, and therefore the number of seeks, speeding up access by relying on a fast CPU rather than a slow HDD to do the majority of the work.

                    And if so, wouldn't its performance be highly dependent on the type of file and how well it compresses? (Like media files performing badly while text files do well?)
                    Yes. Especially when it comes to artificial benchmarks, since they might just write all zeroes or ones to the disk, which is more compressible than anything you'd run into in real life. Actually, it seems like the filesystem could be made smart enough to heuristically stop compressing files that are already compressed (like video) in order to avoid the performance penalty. I have no idea whether that's already being done.
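A heuristic like that could be as simple as test-compressing a small probe of each file and giving up if it barely shrinks. The sketch below is purely hypothetical: the sample size, threshold, and store() helper are invented for illustration and are not btrfs's actual behavior:

```python
import os
import zlib

SAMPLE_BYTES = 4096     # probe size; a hypothetical choice, not btrfs's policy
GIVE_UP_RATIO = 0.95    # store raw if the probe barely shrinks

def store(data):
    """Return ('zlib', blob) if a cheap probe compresses well, else ('raw', blob)."""
    probe = data[:SAMPLE_BYTES]
    if len(zlib.compress(probe)) / max(len(probe), 1) < GIVE_UP_RATIO:
        return ("zlib", zlib.compress(data))
    return ("raw", data)

kind_text, _ = store(b"log line repeated\n" * 10_000)   # text-like data
kind_rand, _ = store(os.urandom(64 * 1024))             # stands in for video/media
print(kind_text, kind_rand)   # zlib raw
```

The probe costs one small compression pass per file, which is cheap next to compressing a large video for no gain.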
