FreeBSD ZFS vs. TrueOS ZoF vs. DragonFlyBSD HAMMER2 vs. ZFS On Linux Benchmarks


  • #11
    Michael, did you do any tuning of the PostgreSQL shared buffers or of ZFS? According to the FreeBSD handbook, if the system running ZFS is used for anything other than just serving files, you should do some basic tuning of the ZFS values, specifically vfs.zfs.arc_max:
    ZFS is an advanced file system designed to solve major problems found in previous storage subsystem software
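    For reference, a minimal sketch of what that tuning could look like, assuming the ARC is capped via a loader tunable on FreeBSD (the 4 GiB figure below is only an illustration, not a recommendation):

    # /boot/loader.conf -- cap the ZFS ARC so it leaves RAM for PostgreSQL's own buffers
    vfs.zfs.arc_max="4294967296"   # 4 GiB in bytes; size this to your workload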

    Comment


    • #12
      Why is it so much faster on BSD compared to ZOL?

      Comment


      • #13
        Originally posted by darkbasic View Post
        Why is it so much faster on BSD compared to ZOL?
        Because OpenZFS has been an integral part of FreeBSD for over a decade?

        ZoL is not even mainlined in Linux, and ZoF/ZoL has only barely been ported to BSD. There was probably more than a little debug code left running in TrueOS during the tests.

        Give it some more time.

        Comment


        • #14
          Originally posted by richardnpaul View Post
          Michael, did you do any tuning of the PostgreSQL shared buffers or of ZFS? According to the FreeBSD handbook, if the system running ZFS is used for anything other than just serving files, you should do some basic tuning of the ZFS values, specifically vfs.zfs.arc_max:
          https://www.freebsd.org/doc/handbook/zfs-advanced.html
          Quote from original article:
          Each operating system was tested in its out-of-the-box/default configuration except where otherwise noted.

          Comment


          • #15
            Originally posted by darkbasic View Post
            Why is it so much faster on BSD compared to ZOL?
            ZFS is not your regular FS where you format and boom, you're done. ZFS is really complex but extremely flexible depending on what you need, so these tests are not wrong in checking out the out-of-the-box performance, but they aren't showing any sort of real-world usage scenario. If you come here to check ZFS for performance, you probably don't need ZFS at all, since you obviously don't have enough understanding of how ZFS works.

            As for performance, this is very subjective and heavily depends on how the pool/volumes were created and on the OS. The BSDs tend to use certain defaults, whereas ZFS on Linux requires those defaults to be passed at pool creation, for example:
            zpool create -f -o ashift=12 ... (for HDD)
            zpool create -f -o ashift=13 ... (for SSD)

            If ashift is wrong at pool creation, it can severely affect performance depending on your disk type.
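            If in doubt, the ashift a pool was actually created with can be checked after the fact; a quick sketch (the pool name is a placeholder):

            zdb -C <pool> | grep ashift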

            Another performance consideration is how you configure your caches (L2ARC, ZIL and SLOG), the amount/speed of your RAM, and the type of RAID you choose.
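            As an illustration only (device names are placeholders, not a recommendation), a mirrored SLOG and an L2ARC device would be attached with something like:

            zpool add <pool> log mirror /dev/nvd0 /dev/nvd1   # dedicated SLOG for synchronous writes
            zpool add <pool> cache /dev/nvd2                  # L2ARC read cache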

            Also, depending on your CPU, you need some testing to get the best possible performance out of the proper compression algorithm, and you should use relatime instead of atime for warm and hot storage. Take into account that enabling dedup without enough RAM will destroy your I/O over time. It is also a very bad idea on ZFS to just format a disk and start enabling services on the main pool; you should always create proper datasets (quotas are up to you if necessary) and configure them for your services, for example:

            PostgreSQL:
            zfs create -o recordsize=8K -o primarycache=metadata -o mountpoint=/var/lib/postgres -o logbias=throughput <pool>/postgres

            /tmp and the likes:
            zfs set sync=disabled <pool>/tmp
            zfs set setuid=off <pool>/tmp
            zfs set devices=off <pool>/tmp

            Samba:
            zfs set acltype=posixacl <pool>/samba
            zfs set xattr=sa <pool>/samba

            Swap: here I recommend bypassing ZFS for swap and using the zswap kernel infrastructure instead, especially if you are using rotating disks.
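            A minimal sketch of enabling zswap on Linux (the compressor and pool-size values are just examples):

            # on the kernel command line
            zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20

            # or toggled at runtime
            echo Y > /sys/module/zswap/parameters/enabled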

            Also take into account that ZFS is really smart with writes, like genius smart (especially with compression and dedup on), and this throws any regular filesystem benchmark out the window, since those benchmarks use randomly generated data that is not guaranteed to always be different; hence ZFS results will vary wildly between runs. It is always better to use ZFS' internal tools to measure performance instead. Note that this also happens on reads and will vary wildly with your configuration as well.
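            By "internal tools" I mean things like zpool iostat; for example, to watch per-vdev throughput while a test runs (the pool name is a placeholder):

            zpool iostat -v <pool> 5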

            For example: let's say benchmark A writes 10+ files of 10 GB each in parallel with random data, the data has a repeat rate of 70%, and all files compress very well with LZ4. ZFS will show 100 GB in that test folder (in Nautilus, for example), but when you check the actual disk space used you will notice that only about 900 MB are actually used on the drives. And if you look really closely, depending on the RAIDZ level you use, the actual data is distributed all over the place, including RAM/ZIL/L2ARC (if ZFS expects the data to be read soon). So if you try to read those 10 files 10 times in a row, you will get something like this:

            pass 1: 1 Gbps, because CPU usage was minimal, disks 2 and 5 in the array were mostly unused, and decompression made heavy use of ZIL/L2ARC/RAM

            pass 2: 400 Mbps, because of high CPU usage and the cache having been flushed before this read

            pass 3: 25 Gbps, because there has been no activity since the last read and everything is still in RAM and the caches

            ....

            and so on; I could literally keep this going for hours and hours. As you can see, just creating a ZFS partition and running a bunch of non-ZFS-aware benchmarks will drive you crazy if you don't understand how ZFS actually works. Check the ZFSonLinux wiki and other ZFS-aware sites before you jump into ZFS for the first time.

            OP, please note I'm not talking directly to you but generically to any reader.
            Last edited by jrch2k8; 28 January 2019, 09:55 AM. Reason: fixed layout, for some reason it all got crammed together

            Comment


            • #16
              Originally posted by jrch2k8 View Post

              ZFS is not your regular FS where you format and boom, you're done. ZFS is really complex but extremely flexible depending on what you need, so these tests are not wrong in checking out the out-of-the-box performance, but they aren't showing any sort of real-world usage scenario. If you come here to check ZFS for performance, you probably don't need ZFS at all, since you obviously don't have enough understanding of how ZFS works.
              I do know how ZFS works and how to tune it: http://www.linuxsystems.it/2018/05/o...t4-benchmarks/
              The point is that, optimized or not, it should perform in a similar manner with similar settings. Somehow ZOL seems to be just slower when you compare the default settings of both OSes. So my question is: is there anything that could explain such a big difference, or is ZOL just way less optimized?

              Comment


              • #17
                Originally posted by darkbasic View Post

                I do know how ZFS works and how to tune it: http://www.linuxsystems.it/2018/05/o...t4-benchmarks/
                The point is that, optimized or not, it should perform in a similar manner with similar settings. Somehow ZOL seems to be just slower when you compare the default settings of both OSes. So my question is: is there anything that could explain such a big difference, or is ZOL just way less optimized?
                From my experience, on FreeBSD/FreeNAS, since ZFS is used by default, the installer has a set of sane general defaults (like proper ashift settings), whereas ZoL is out of tree and basically has zero easy user defaults. It would require some testing from the CLI, creating two volumes without any -o attribute on both OSes, to see a real difference.

                Also notice that L2ARC has different defaults on both OSes, so that would also need some standardization to be sure.
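                For what it's worth, the L2ARC knobs can be compared side by side on both systems, roughly like this:

                sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_noprefetch      # FreeBSD
                grep . /sys/module/zfs/parameters/l2arc_*                    # ZFS on Linux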

                I do expect differences either way, since both OSes handle memory differently enough, and the actual disk controller could make a difference to either side as well.

                But also, the only way to get real performance metrics is with zpool iostat and company; any regular Phoronix benchmark will produce wildly different results based on a bunch of factors, so we don't actually know whether the performance difference is real or whether it was generated falsely by in-ZFS optimization and the system state at the moment of the benchmark.

                Either way, in the past I have used zpool iostat on similar hardware between FreeNAS and Arch Linux, and the difference wasn't easily noticeable when using similar options after optimization. As far as I know, the only ZFS implementation that usually underperforms is OpenZFS on OS X, but I suppose that was some OS X weirdness, since it was based on the same tree as FreeBSD/illumos (this may also have changed recently).

                Comment


                • #18
                  Originally posted by jrch2k8 View Post

                  From my experience, on FreeBSD/FreeNAS, since ZFS is used by default, the installer has a set of sane general defaults (like proper ashift settings), whereas ZoL is out of tree and basically has zero easy user defaults. It would require some testing from the CLI, creating two volumes without any -o attribute on both OSes, to see a real difference.

                  Also notice that L2ARC has different defaults on both OSes, so that would also need some standardization to be sure.

                  I do expect differences either way, since both OSes handle memory differently enough, and the actual disk controller could make a difference to either side as well.

                  But also, the only way to get real performance metrics is with zpool iostat and company; any regular Phoronix benchmark will produce wildly different results based on a bunch of factors, so we don't actually know whether the performance difference is real or whether it was generated falsely by in-ZFS optimization and the system state at the moment of the benchmark.

                  Either way, in the past I have used zpool iostat on similar hardware between FreeNAS and Arch Linux, and the difference wasn't easily noticeable when using similar options after optimization. As far as I know, the only ZFS implementation that usually underperforms is OpenZFS on OS X, but I suppose that was some OS X weirdness, since it was based on the same tree as FreeBSD/illumos (this may also have changed recently).
                  Not to mention, the pool features enabled by default on each pool will be different between FreeBSD ZFS, ZFS-on-Linux, and ZFS-on-FreeBSD. There are a few features that ZoL supports (like large dnodes and encryption and other stuff) that FreeBSD ZFS doesn't, and vice versa (like TRIM and NFSv4 ACLs), and that aren't enabled in ZoF yet.

                  In order to do a proper apples-to-apples comparison between the three ZFS implementations, you'd need to find the common feature set and only enable those on pool creation. Then, once you have those results, compare them to the default features for each implementation to see how the defaults affect things.
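                  A rough sketch of that approach (the feature names below are just examples of the feature@ properties, not the actual common set):

                  # see which features each implementation enabled by default
                  zpool get all <pool> | grep feature@

                  # create a pool with no features, then enable only the agreed-upon subset
                  zpool create -d -o feature@async_destroy=enabled -o feature@lz4_compress=enabled <pool> <vdevs>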

                  Not to mention, benchmarking ZFS on a single disk (SSD or not) is pretty much pointless and useless. Much better to do it on a variety of configurations (simple 2-disk mirror, simple pair of 2-disk mirrors [aka RAID10], 6-disk raidz2, pair of 6-disk raidz2 [aka RAID60]), and compare that to a similar setup on Linux using mdadm and/or LVM, with different filesystems on top.

                  These throwaway "out-of-the-box-because-I-can't-be-bothered-to-think" "benchmarks" are just that: throwaway useless.

                  Comment


                  • #19
                    Originally posted by jrch2k8 View Post

                    ZFS is not your regular FS where you format and boom, you're done. ZFS is really complex but extremely flexible depending on what you need, so these tests are not wrong in checking out the out-of-the-box performance, but they aren't showing any sort of real-world usage scenario. If you come here to check ZFS for performance, you probably don't need ZFS at all, since you obviously don't have enough understanding of how ZFS works.


                    OP, please note I'm not talking directly to you but generically to any reader.
                    There are people out there who, for various reasons, run production servers without dedicating their lives to being full-time server administrators. For example, I'm running about fifteen production FreeBSD servers that serve millions of users per month. I do not have time to dig into every detail of the OS. I don't have hundreds of lines in my configs that tune the OS, because I know that the worst idea is to do things I don't understand 100%, such as copy-pasting "pro" recommendations from the internet.

                    Most of the time, I keep the OS at its defaults. One of the reasons I use FreeBSD is that the defaults, most of the time, are good. Hence it makes sense to publish tests with out-of-the-box configurations: a lot of people will use these systems with the defaults.

                    However, sometimes I have the time and interest to run tests and benchmarks. That's why I can tell you that what you copied here, which is usually the first result on Google for "zfs tuning", is wrong.

                    For starters, ashift isn't related to HDD vs. SSD but to the sector size of the disk.
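                    In other words, ashift is log2 of the sector size (2^9 = 512 B, 2^12 = 4 KiB, 2^13 = 8 KiB), so the sensible thing is to look the sector size up before creating the pool; a quick sketch (device names are placeholders):

                    lsblk -o NAME,PHY-SEC,LOG-SEC                 # Linux
                    diskinfo -v /dev/ada0 | grep sectorsize       # FreeBSD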

                    The recordsize=8K for PostgreSQL is one of the worst pieces of advice I've ever read on the Internet over the decades. I know it's not your idea, because I've hardly ever read a ZFS tuning guide that didn't recommend it, which makes me wonder how many people copy-paste "tuning" without serious testing.

                    I have PostgreSQL databases ranging from 10G to 100G in size, and I couldn't find a single case where the 8k recordsize didn't perform worse than the 128k one. And if you turn on compression, even the theoretical basis that argues why 8k would be better is gone.

                    In general, the smaller the recordsize, the worse ZFS' performance gets. I tried various recordsizes under PostgreSQL, exim's pool, various logs and millions of image files. In my tests using real data, lowering the recordsize always made things worse regardless of the use case.
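                    If anyone wants to reproduce this, a rough sketch of the kind of comparison I mean (dataset names, scale factor and runtime are placeholders):

                    zfs create -o recordsize=8K -o compression=lz4 <pool>/pg-8k
                    zfs create -o recordsize=128K -o compression=lz4 <pool>/pg-128k

                    # point PGDATA at each dataset in turn and compare
                    createdb bench && pgbench -i -s 100 bench
                    pgbench -c 8 -j 8 -T 300 bench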

                    The most important (and maybe the only) lesson I've learned from arbitrary tuning guides is not to copy-paste config lines without either understanding or testing what they are meant to do.

                    Comment
