
FreeBSD ZFS vs. Linux EXT4/Btrfs RAID With Twenty SSDs


  • FreeBSD ZFS vs. Linux EXT4/Btrfs RAID With Twenty SSDs

    Phoronix: FreeBSD ZFS vs. Linux EXT4/Btrfs RAID With Twenty SSDs

    With FreeBSD 12.0 running great on the Dell PowerEdge R7425 server with dual AMD EPYC 7601 processors, I couldn't resist using the twenty Samsung SSDs in that 2U server to run some fresh FreeBSD ZFS RAID benchmarks, alongside reference figures from Ubuntu Linux using both the native Btrfs RAID capabilities and EXT4 atop MD-RAID.


  • #2
    ZFS-on-Linux benchmarks will come when the upcoming ZoL 0.8 release is available.
    That is probably going to take MONTHS: just test 0.7.12.



  • #3
    I know you were going with the default options for fairness' sake, but those really screw ZFS on database benchmarks.
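
    Illustrative only: a common ZoL tuning for a PostgreSQL dataset looks something like this (hypothetical pool/dataset names, not what the article tested):

      # Match the dataset record size to PostgreSQL's 8K pages
      zfs set recordsize=8k tank/pgdata
      # On a separate WAL dataset, trade latency for throughput
      zfs set logbias=throughput tank/pgwal
      # Cache only metadata in ARC; let the database manage its own buffer pool
      zfs set primarycache=metadata tank/pgdata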

    I'd personally be happy with the current 0.7.12 stable or 0.8.0-rc2/master for ZoL tests. My root drive has been on 0.8 since rc1 and is currently up to date with their git master.

    Does FreeBSD use ashift=9 (512-byte sectors) or ashift=12 (4K sectors) by default? If it's ashift=9, that could explain the 4K write test results.
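
    For what it's worth, you can check what a pool actually got, and force 4K alignment at creation time; a minimal sketch (hypothetical pool name):

      # Report the ashift in use on an existing pool
      zdb -C tank | grep ashift
      # Force 4K sectors (2^12 bytes) when creating a pool on ZoL
      zpool create -o ashift=12 tank /dev/sda /dev/sdb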



  • #4
    Some of those BTRFS results are hilariously bad! Looks like there's some real room for optimization there. ZFS is still slower with a single disk, though.

    Next time someone asks why the hell I'm running BTRFS on top of hardware RAID... I'll point them to these benchmarks.

    Note: I mostly use BTRFS myself, but I do have a few FreeBSD systems on ZFS, and I'm quite happy with both.



  • #5
    BTRFS better than EXT4 for database benchmarks? That's new!



  • #6
    Twenty disks in a single vdev is suboptimal.
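
    For example, a more conventional layout splits the twenty disks into two raidz2 vdevs, since ZFS stripes across vdevs and each vdev contributes roughly one disk's worth of IOPS (hypothetical FreeBSD device names):

      # Two 10-disk raidz2 groups instead of one 20-disk vdev
      zpool create tank \
          raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
          raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19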



  • #7
    I wonder how XFS would perform. Red Hat has been investing heavily in it for server workloads that assume RAID, and now they're also building Stratis on top of it, which is likewise RAID-oriented.
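
    For reference, the Stratis CLI is roughly this shape (hypothetical pool and device names, from my reading of the early docs):

      # Pool from raw block devices, then an XFS-backed filesystem on top
      stratis pool create mypool /dev/sdb /dev/sdc
      stratis filesystem create mypool fs1
      mount /dev/stratis/mypool/fs1 /mnt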



  • #8
    I get why people like to do these comparisons with the latest, fastest hardware they have available... but when you want to know how lower-spec systems will perform, it's almost impossible to find a decent comparison.

    I'd be interested in seeing a similar comparison using an i3 or similar, about 4GB of RAM, and 2 or 4 basic everyday 4-8TB HDDs. Most home users looking at home-brew NASes would probably like to know how to save on CPU and RAM while maximizing storage performance per dollar.



  • #9
    I'm disappointed in the number of ZFS comparison benchmarks that get published without discussing the file system implementation's use of RAM. Phoronix is not the only one that has done this, but I expected Phoronix to know better.

    Try setting up a server dedicated to PostgreSQL and optimize the database's RAM usage (upping max_connections, shared_buffers, effective_cache_size, etc.) on a system running ext4 or xfs. Once that tuning takes full advantage of the RAM, move the same configuration over to a ZFS setup. The result I get is a system that thrashes: ZFS takes a great deal of the RAM for itself, and PostgreSQL's attempt to use the same RAM pushes the system into swapping. If you reduce that impact by lowering the PostgreSQL tuning parameters, you end up with a system that doesn't match the performance of the ext4 or xfs configuration. ZFS's demand that memory be used for file system caching instead of application caching ultimately results in a poorly tuned database server.
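
    Concretely, the kind of tuning I mean looks like this in postgresql.conf (illustrative values for a 64GB machine, not a recommendation):

      max_connections = 200
      shared_buffers = 16GB          # PostgreSQL's own buffer pool
      effective_cache_size = 48GB    # planner hint: RAM the OS cache is expected to provide
      work_mem = 64MB                # per-sort/hash memory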

    Even worse is if you need a large amount of storage for the database server. ZFS stands for "Zettabyte File System," which is ironic given how poorly it scales in real-world terms. With 12TB hard drives available, it is not hard to build a petabyte array. According to the ZFS rule of thumb of providing 1GB of RAM for every 1TB of disk, that petabyte array should be paired with a system that has 1,000 gigabytes of RAM?!?! The majority of server motherboards I have worked with top out at less than a fifth of that!

    Lastly, it seems with the release candidates of RHEL 8 that Red Hat is strongly pushing XFS with a Btrfs-like configuration interface provided by Stratis Storage. When doing file system comparisons, it would be nice if XFS were also included in the benchmarking. And again, it would be nice to see how much RAM is left available for application services and how much is monopolized by the file system's kernel module.



  • #10
    Originally posted by chilinux View Post
    I'm disappointed in the number of ZFS comparison benchmarks that get published without discussing the file system implementation's use of RAM. [...]
    You mentioned tuning PostgreSQL to optimize its memory usage. Likewise with ZFS: there are many parameters that help when you're operating in a memory-constrained (or memory-contended) system, notably zfs_arc_max, which limits how much memory ZFS can use for its caching. I don't think you can talk about tuning Postgres and then complain when you haven't done the same for ZFS.
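
    A minimal sketch of that cap, assuming you want the ARC held to 8GiB (the byte value is 8 * 2^30):

      # ZoL, at runtime:
      echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
      # ZoL, persistently, in /etc/modprobe.d/zfs.conf:
      options zfs zfs_arc_max=8589934592
      # FreeBSD equivalent, in /boot/loader.conf:
      vfs.zfs.arc_max="8589934592"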

    That memory "rule of thumb" with ZFS applies when using deduplication, which isn't something a lot of people need or use. If you want to dedup a petabyte worth of storage on a system with 84 hard drives in a single vdev, I'd say 1TB of memory isn't exactly crazy.
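
    And before anyone enables dedup blind: ZFS can simulate it on existing data so you can size the dedup table first (hypothetical pool name; the ~320 bytes per entry figure is the usual rule of thumb):

      # Prints a simulated dedup table histogram without enabling dedup;
      # unique blocks x ~320 bytes approximates the RAM the DDT would need
      zdb -S tank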

    Using edge cases to argue against mainstream use of something seems like grasping at straws to me. If you just don't like ZFS, then that's fine, I suppose. Just state it plainly.

