Testing EXT4 & Btrfs On A Serial ATA 3.0 SSD
  • Testing EXT4 & Btrfs On A Serial ATA 3.0 SSD

    Phoronix: Testing EXT4 & Btrfs On A Serial ATA 3.0 SSD

    Last month I wrote a review on the OCZ Vertex 3 240GB solid-state drive, which was a very impressive Serial ATA 3.0 SSD. The performance of this solid-state drive was terrific and a huge improvement over previous-generation SATA 2.0 SSDs and over SATA 3.0 hard drives. All of that testing was done with the drives formatted to the common EXT4 file-system, but this article offers more benchmarks from the OCZ Vertex 3 as it is tested with Btrfs and various mount options.


  • #2
    Combination of mount options

    As space_cache and (ironically) nossd almost always seem to improve performance, it would be nice to know how btrfs performs with a combination of mount options. E.g., would btrfs + space_cache + nossd + compress=lzo get closer in performance to ext4 for the SQLite benchmark?
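    As a sketch, such a combination could be tried by remounting with all three options at once; the device and mount point below are placeholder names, not from the article:

    ```shell
    # Remount an existing btrfs volume with all three options combined
    # (/dev/sdb1 and /mnt/bench are illustrative; adjust to your system).
    mount -o remount,space_cache,nossd,compress=lzo /dev/sdb1 /mnt/bench

    # Or persist the combination in /etc/fstab:
    # /dev/sdb1  /mnt/bench  btrfs  space_cache,nossd,compress=lzo  0  0
    ```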

    • #3
      Very interesting

      It's quite impressive how well Ext4 runs. Comparing the baseline BTRFS with Ext4 is a little unfair, given that BTRFS is much more complex and capable than Ext4 (thereby being inherently slower for simple tasks), but it's reassuring to see that with additional capabilities such as space_cache, BTRFS is becoming competitive in several important scenarios.

      At any rate, this is a very useful and insightful article.

      What I haven't seen anywhere is a disk benchmark of encrypted volumes. What I'm really interested in seeing is how encryption affects SSDs, primarily ones that attempt to heavily compress data like those based on Sandforce controllers. AS-SSD measures raw bandwidth, but it doesn't provide any real insight as to how an SSD would work under a typical workload.

      • #4
        Very nice test, Michael! I applaud you!

        Ext4 is not losing much; actually, it wins in many tests. It's very stable, and it has fsck.
        I use ext4 with data=journal, the safest option, and its performance even on a 4K-sector 5400 RPM drive (30-80 MB/s) is already more than enough for desktop usage.
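        For anyone wanting to try the same setup, data=journal can be enabled at mount time; the device and mount point below are placeholders, not from this post:

        ```shell
        # Route both file data and metadata through the journal --
        # ext4's safest (and slowest) journaling mode.
        # /dev/sda2 and /mnt/data are placeholder names.
        mount -o data=journal /dev/sda2 /mnt/data

        # Or persist it in /etc/fstab:
        # /dev/sda2  /mnt/data  ext4  data=journal  0  2
        ```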

        • #5
          Originally posted by spinron View Post
          Comparing the baseline BTRFS with Ext4 is a little unfair, given that it is much more complex and capable than Ext4 (thereby being inherently slower for simple tasks).
          Since the intended usage for BTRFS is to be the default install for distributions like Fedora and Ubuntu, it's not unfair at all. If it can't beat EXT4 in simple tasks and in a default configuration, it doesn't belong anywhere on the desktop as a default. This just shows that BTRFS is still nowhere near where it needs to be.

          • #6
            I wonder if the ssd mount option works better if you have the noop or deadline I/O scheduler instead of the default CFQ. It would be an interesting thing to benchmark, since CFQ is written to optimize performance of rotating disks. Just take btrfs vs. btrfs nossd vs. ext4, and cfq vs. deadline vs. noop over the same tests.
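            For anyone reproducing such a matrix, the scheduler can be switched per device at runtime; sda below is a placeholder device name:

            ```shell
            # Show the available schedulers for a device; the bracketed
            # entry is the one currently active.
            cat /sys/block/sda/queue/scheduler

            # Switch to deadline (or noop) for the next benchmark run;
            # this does not persist across reboots.
            echo deadline > /sys/block/sda/queue/scheduler
            ```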

            • #7
              Originally posted by locovaca View Post
              Since the intended usage for BTRFS is to be the default install for distributions like Fedora and Ubuntu, it's not unfair at all. If it can't beat EXT4 in simple tasks and in a default configuration, it doesn't belong anywhere on the desktop as a default. This just shows that BTRFS is still nowhere near where it needs to be.
              You're assuming performance is the raison d'être for btrfs, but the big gain would be in the snapshotting. You would no longer have unrecoverable systems due to updates. You would be able to safely update your system from release to release without doing a reinstall (this doesn't mean the update would work, but it would mean that it would be safe). These are ease-of-use features that would be fantastic to have, and that's ignoring the data-integrity assurances it provides.

              • #8
                And to add to that, few desktop users run PostgreSQL.

                • #9
                  Don't we have snapshots?

                  Originally posted by liam View Post
                  You're assuming performance is the raison d'être for btrfs, but the big gain would be in the snapshotting. You would no longer have unrecoverable systems due to updates. You would be able to safely update your system from release to release without doing a reinstall (this doesn't mean the update would work, but it would mean that it would be safe). These are ease-of-use features that would be fantastic to have, and that's ignoring the data-integrity assurances it provides.
                  Don't we already have that with LVM? openSUSE is promising all sorts of snapshot support with "snapper" in their forthcoming 12.1 release:
                  In short, snapper can work with zypper or on its own to regularly snapshot the system and especially when you do upgrades. At any point, you can list the snapshots, view the changes and revert to any earlier snapshot.

                  The YaST module even allows rolling back updates/changes on a single file!
                  and one of their Google Summer of Code projects was
                  Add ext4 snapshots support to snapper
                  so you might not need to wait until Btrfs is up to par to have snapshot support.
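                  For comparison, the LVM route mentioned above would look roughly like this; the volume group and LV names are placeholders:

                  ```shell
                  # Take a snapshot of a logical volume before an upgrade; 5G is
                  # the space reserved for copy-on-write changes, and vg0/root
                  # is a placeholder name.
                  lvcreate --size 5G --snapshot --name root-pre-upgrade /dev/vg0/root

                  # If the upgrade goes wrong, merge the snapshot back into the
                  # origin (for an in-use volume, the merge completes on the
                  # next activation/reboot).
                  lvconvert --merge /dev/vg0/root-pre-upgrade
                  ```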

                  • #10
                    The Vertex 3 is a SandForce-based drive, is it not? If so, then enabling filesystem-level compression will prevent the SandForce controller from doing any compression, which will have a large effect on the benchmark results. While it's still a fair test, since it is a combination a user could end up with, I would also like to see results from a non-SandForce drive.
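                    One rough way to check how much controller-level compression matters is to compare highly compressible writes against incompressible ones; the output paths below are placeholders on the drive under test:

                    ```shell
                    # Highly compressible data: a SandForce controller can
                    # shrink this internally, inflating apparent write speed.
                    dd if=/dev/zero of=/mnt/bench/zeros.bin bs=1M count=256 oflag=direct

                    # Incompressible data: defeats controller compression,
                    # exposing the drive's worst-case write speed.
                    dd if=/dev/urandom of=/mnt/bench/random.bin bs=1M count=256 oflag=direct
                    ```

                    A large gap between the two throughput figures suggests the controller's compression is doing significant work.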
