Running ZFS On The Linux 4.1 Kernel


  • #11
    Originally posted by energyman View Post
    Testing zfs on a single disk, is like testing a car after removing all tires.
    Also ZFS is meant to be run on HDDs as storage and SSDs just for cache/log. Even without the cache and log I would expect it to beat BTRFS (the only direct competitor) by a huge margin on HDDs.



    • #12
      Originally posted by -MacNuke- View Post

      ZFS's default recordsize is 128K as far as I remember. A database uses 8K or 16K blocks. You have to configure ZFS for the database of your choice; if you just use the default values you're wasting performance.

      So this benchmark is pretty useless.

      ZFS is _NOT_ a "fire and forget" filesystem.

      ZFS is a variable block filesystem. The default block size is 128K, but if you're only writing 2k it will only use a 2k block plus some extra for metadata. However, the interesting bit comes when it's writing multiple 2k blocks at a time in a "bursty" fashion. It will put them all in a contiguous 128k block. If you're writing more than 128K it will break it up into, say for example, a 128K block and a 64K block.
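
      For what it's worth, matching the recordsize to the database page size is a one-line property change. As a rough sketch (the pool/dataset name tank/db and the 16K value are just illustrative examples, not from the article):

      # set a smaller recordsize on the dataset holding the database files
      zfs set recordsize=16K tank/db
      # confirm the effective value
      zfs get recordsize tank/db

      Keep in mind the property only applies to blocks written after the change, so set it before loading the data.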

      However, as another poster pointed out, ZFS really starts to shine when you start adding VDEVs and a ZIL / L2ARC to the mix, so testing it on a single disk system is great, but not playing to its strengths. A more interesting comparison would be a multi-disk system that is using LVM to bind multiple disks with, let's say, ext4 on top and then the same disks with raw ZFS. ie: don't stick ZFS on top of a LVM volume.
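
      As a rough sketch of the kind of multi-disk setup being described (device names are purely illustrative, not a recommendation):

      # pool made of two mirrored vdevs, plus a dedicated log device (SLOG) and a cache device (L2ARC)
      zpool create tank mirror sda sdb mirror sdc sdd
      zpool add tank log sde
      zpool add tank cache sdf
      zpool status tank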

      It would also be nice to see some FreeBSD and/or illumos comparisons. OmniOS is a good illumos choice for a storage test.
      Last edited by rhavenn; 07 July 2015, 08:18 PM.



      • #13
        ZFS fanboys have been yelling "ZFS everywhere" for years now. And now all of a sudden it's "except desktop use"! Uh huh. Where are these complaints with Btrfs?



        • #14
          Originally posted by xeekei View Post
          ZFS fanboys have been yelling "ZFS everywhere" for years now. And now all of a sudden it's "except desktop use"! Uh huh. Where are these complaints with Btrfs?
          I run FreeBSD with ZFS on all my desktops and laptops. It works just fine and I prefer its feature set over something that I could, perhaps, get a little more speed out of. It takes a bit of tuning so it doesn't eat all your memory, but overall it performs just fine. Volume management for USB sticks, etc. works just fine with it. I have no complaints. "Speed" is really very subjective. A few milliseconds or even seconds here or there really doesn't matter unless you're crunching numbers, and most of that stuff will be CPU and/or memory bound anyway.
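
          For reference, the memory tuning mentioned above usually just means capping the ARC. On FreeBSD that is a single loader tunable (the 4G figure is only an example value):

          # /boot/loader.conf
          vfs.zfs.arc_max="4G"

          On ZFS on Linux the equivalent is the zfs_arc_max module parameter (a byte value, e.g. options zfs zfs_arc_max=4294967296 in /etc/modprobe.d/zfs.conf).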

          In an enterprise environment, raw spindle speed / SSD speed will be your bottleneck pretty quickly with a single disk, so you move that off to multiple vdevs in a single pool and make 100, or some random number of, requests to it from 20 different systems. How does the performance of ZFS, btrfs and an LVM+ext4 setup compare then?

          The point isn't that ZFS can't be used by desktop users. It's more that, just like btrfs, raw speed isn't really its primary goal. Sure, it shouldn't be the slowest horse in the race, but it's not out to win raw speed races. btrfs gets pretty spanked in performance by ext4 and F2FS in pretty much every measurement that phoronix has run in FS comparison tests.

          I run Arch on my gaming box so I can get Steam, but if Steam ran on FreeBSD I'd happily use that in a heartbeat.
          Last edited by rhavenn; 07 July 2015, 08:47 PM.



          • #15
            Originally posted by xeekei View Post
            ZFS fanboys have been yelling "ZFS everywhere" for years now.
            They have?



            • #16
              Originally posted by johnc View Post

              They have?
              Google the phrase. You are bound to get quite a few tweets from people. Also BSD podcasts.

              My point was: when Btrfs loses speed benchmarks, people on here rag on Btrfs. But the almighty ZFS, which can do no wrong, is just being used incorrectly.

              Disclaimer: I agree this benchmark run is not really relevant, but it'd be good to see some consistency here.



              • #17
                Originally posted by xeekei View Post
                ZFS fanboys have been yelling "ZFS everywhere" for years now. And now all of a sudden it's "except desktop use"! Uh huh. Where are these complaints with Btrfs?
                ZFS is fine for the desktop if you realise that you sacrifice some speed for many other features, like checksums and instant snapshots. The same is true for btrfs, except that the latter still suffers from ENOSPC issues and quite a few other bugs. It's even better if you take some time to enable compression where it fits, etc.
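
                Enabling compression is also just a property change; as an illustrative example (the dataset name is hypothetical):

                # cheap, transparent compression; lz4 is usually a safe default
                zfs set compression=lz4 tank/home
                # check how well it compresses
                zfs get compressratio tank/home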

                It is just that you lose the self-healing capability with a single disk, and maybe, if speed is more important, you should still use XFS/EXT4 instead.

                I wish there was native encryption too, but no luck with that yet... unless you use Solaris.



                • #18
                  Originally posted by rhavenn View Post


                  ZFS is a variable block filesystem. The default block size is 128K, but if you're only writing 2k it will only use a 2k block plus some extra for metadata. However, the interesting bit comes when it's writing multiple 2k blocks at a time in a "bursty" fashion. It will put them all in a contiguous 128k block. If you're writing more than 128K it will break it up into, say for example, a 128K block and a 64K block.

                  However, as another poster pointed out, ZFS really starts to shine when you start adding VDEVs and a ZIL / L2ARC to the mix, so testing it on a single disk system is great, but not playing to its strengths. A more interesting comparison would be a multi-disk system that is using LVM to bind multiple disks with, let's say, ext4 on top and then the same disks with raw ZFS. ie: don't stick ZFS on top of a LVM volume.

                  It would also be nice to see some FreeBSD and/or illumos comparisons. OmniOS is a good illumos choice for a storage test.

                  Your point is reasonable.
                  While that's not the case with ext4 (originally, at least; perhaps it's better now since tso has been working at Google for a while), XFS was designed to scale very well (and is another example of a filesystem that really needs its parameters chosen carefully). Btrfs, at one point, had issues scaling, but I know they fixed that bug.
                  I'd like to see btrfs, zfs, and xfs+lvm+lvm_cache compared.
                  For really massive scaling, something like Ceph or Gluster should be included as well.



                  • #19
                    Actually, I just want to know the values of the logbias and sync parameters. If those are at their default values, it means ZFS will automatically use the ZIL for sync write I/O smaller than 32K.
                    Because there is only one SSD and no SLOG, the ZIL is just created on the pool itself, and each sync write gets written twice to the SSD, so yeah, not the best config here.

                    By the way, if you have a database, the Oracle best practice is most of the time to set logbias to throughput. That just means you bypass the ZIL mechanism (it also means you lose the ZFS atomic transaction system and thus the integrity of the filesystem), but Oracle databases already have redo logs.
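
                    That is likewise a single dataset property; as an illustrative example (the dataset name is hypothetical):

                    # skip the separate log path for database datasets that have their own redo logs
                    zfs set logbias=throughput tank/oracle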

                    So if you want "good" performance and low latency while keeping filesystem integrity, you have to set the sync parameter to always and add an SLOG to your pool; it's just mandatory.
                    The SLOG should have really low latency and good write endurance, but it can be really small (it only has to hold about 5 seconds of writes); that's why ZeusRAM devices are the best for that.
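
                    Roughly, the setup described above comes down to this (pool, dataset and device names are illustrative only):

                    # force all writes through the ZIL, then give the ZIL a fast dedicated device
                    zfs set sync=always tank/db
                    zpool add tank log nvme0n1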



                    • #20
                      Originally posted by oleid View Post
                      Huh? Why the bad database and multi-threading results? I always thought ZFS was designed for that purpose.
                      ZFS was designed to hide the high latency and low I/O rate of big, slow HDD storage behind RAM/SSD caches (ARC, L2ARC, SLOG).
                      With a benchmark run directly on an SSD, all these (great) features become totally useless (and ZFS becomes useless too in terms of performance, though the reliability features are still there).

