Linux Distributions vs. BSDs With netperf & iperf3 Network Performance

  • #61
    Originally posted by aht0 View Post
    You are bringing up extreme examples. FAT32 and HAMMER have as much similarity in features as a flying carpet and an airplane.
    I'm exaggerating it a bit to show that there is also a pretty large difference between HAMMER and zfs/btrfs. Not as large, but you can't ignore it.

    A single problem (be it some language limitation, a not completely thought-out algorithm, or a bug you can't figure out) that you could not see in advance may force you to revise and rewrite a bunch of code from scratch. Or you could have nothing unforeseen interfering and go at it at a linear pace. It's how Murphy decrees.
    I'm unsure of what you mean here. I meant that if you plan to support features X, Y and Z (and they are related, as it's all "where to place data blocks on a block device" at the end of the day), the code to implement feature X is going to be much more complex if you must also have Y and Z than if you had just X, or just X and Y.

    Indeed? Check pool version changelogs.
    Well, ok, I should have been more specific; it's not "any" feature, as that's silly. I meant that they could not simply go and change ZFS to add features that btrfs already has stable without some major rewrite.

    For example: defragmentation, or shrinking a filesystem. They took ages to add a command to enlarge the pool, which should have been there from day 1. Defrag requires some ugly hacks in the code and nobody wants to do that.

    Meanwhile btrfs added these features long ago.
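    For reference, both of those have long-standing btrfs-progs commands, and both run online on a mounted filesystem (the mount point here is a placeholder):

    ```shell
    # Defragment, recursing into the subvolume
    btrfs filesystem defragment -r /mnt/data

    # Shrink the filesystem by 10 GiB while mounted...
    btrfs filesystem resize -10g /mnt/data

    # ...and grow it back to fill the underlying device
    btrfs filesystem resize max /mnt/data
    ```
    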

    Comment


    • #62
      .....
      Last edited by k1e0x; 05 April 2018, 09:15 PM.

      Comment


      • #63
        Originally posted by starshipeleven View Post
        I'm unsure of what you mean here. I meant that if you plan to support features X, Y and Z (and they are related, as it's all "where to place data blocks on a block device" at the end of the day), the code to implement feature X is going to be much more complex if you must also have Y and Z than if you had just X, or just X and Y.
        Ok, agree. Misunderstood you initially.
        Originally posted by starshipeleven View Post
        Well, ok, I should have been more specific; it's not "any" feature, as that's silly. I meant that they could not simply go and change ZFS to add features that btrfs already has stable without some major rewrite.
        For example: defragmentation, or shrinking a filesystem. They took ages to add a command to enlarge the pool, which should have been there from day 1. Defrag requires some ugly hacks in the code and nobody wants to do that.
        Meanwhile btrfs added these features long ago.
        ZFS "defrag" is doable by rm -rf /folder and restoration of latest snapshot. Or do whatever else which sequantially reads existing data and writes it back again. Another way to lessen it is to increase block size. To get truly fragmented ZFS you'd have to artifically do it. ZFS has mechanisms that try to fight fragmentation.
        The need for defrag might fade away entirely as SSDs (including SAS SSDs) become cheaper. You can already get up to 4TB SAS SSDs as long as you have the money for them.

        Shrinking? How often have you needed it? It's usually running out of space that becomes a problem.

        Comment


        • #64
          I understand that BTRFS is under heavy development and such... but the last time I tried it, it lasted about 6 weeks. I don't know what happened with it.

          XFS was very fast, which was nice. Later though, it got slower and less consistent. Fragmentation?

          JFS offered very good performance for me, somewhere around XFS. But corruption killed it too.

          EXT4 worked just fine; I noticed it was slower than XFS, but I never noticed it get significantly slower or faster with usage.

          ReiserFS was nice and fast, but large files did not have the throughput of the others like EXT4 and crawled. IIRC it got terribly fragmented.

          Reiser4 didn't work for me at all.

          Btrfs averaged out to somewhere just under EXT4 for speed/throughput. It would have bursts of speed and then hang. It's very possible that there was a bug or I had it set up wrong, but hey, it's my experience with it. Eventually it got corrupted, or I upgraded a package to a broken version. Lots of bleeding edge stuff.

          I'm more interested in HAMMER2, OpenZFS, and if anything will come of Tux3.

          Comment


          • #65
            Originally posted by starshipeleven View Post
            Hm, good point. No distro in there was using a preempt-rt kernel by default.
            Debian has them in the repos and it's easy to install them, also Ubuntu. Don't know about the others but I suspect Fedora also does.
            This might be interesting for Michael as a follow-up, in addition to firewall on/off.
            Nah. Josh doesn't like to stray from the defaults, so that's what we get :|
            The solution for Fedora has long been Planet CCRMA (a Stanford mirror that also hosted the latest packages of interest for rt users).
            It's a bit strange considering Red Hat has an rt product (MRG).

            Comment


            • #66
              Originally posted by aht0 View Post
              You are bringing up extreme examples. FAT32 and HAMMER have as much similarity in features as a flying carpet and an airplane.

              A single problem (be it some language limitation, a not completely thought-out algorithm, or a bug you can't figure out) that you could not see in advance may force you to revise and rewrite a bunch of code from scratch. Or you could have nothing unforeseen interfering and go at it at a linear pace. It's how Murphy decrees.


              Indeed? Check the pool version changelogs. Oracle's, for example. The latest pool version is 37, which added LZ4 compression. Pool versions starting from 29 are Oracle closed source.
              https://docs.oracle.com/cd/E53394_01...801/gjxle.html

              Somehow the OpenZFS version for FreeBSD contains the same feature (LZ4, plus support for a bunch of other compression algorithms). OpenZFS is compatible with Oracle's only up to pool version 28 plus feature flags, so the compression is, taking things logically, an independently added feature. I see quite a few more features which exist both in OpenZFS and in Oracle's above pool v28.
              Just wanted to chime in that, from what I've read on the btrfs ML regarding this exact subject, adding a compression algorithm X is rather trivial as far as adding features to a filesystem is concerned.

              Comment


              • #67
                Pool version 30 added encryption. Still easy?

                Comment


                • #68
                  Originally posted by liam View Post

                  Just wanted to chime in that, from what I've read on the btrfs ML regarding this exact subject, adding a compression algorithm X is rather trivial as far as adding features to a filesystem is concerned.
                  Idk why we are talking about this in a thread on FreeBSD networking performance, but it seems to be a point of contention for a lot of people. There is a lot of misunderstanding about ZFS in general around here, I find. (I am a retired senior network engineer and have used ZFS since about 2009; I can speak to some of its design, but I'm not a developer by trade.)

                  The actual big improvements to OpenZFS post-Oracle have been:
                  Progress info on send/receive (apparently this was a very hard problem).
                  Compressed L2ARC.
                  Performance improvements in the write throttle, and asynchronous vdev destroy.
                  Snapshot aliasing and bookmarking.
                  TRIM support.
                  Lots of improvements to scrub and resilver performance.
                  And bunches of other things... boot environments, etc.

                  The big one is done, and that is ZFS encryption at rest. It's not actually in current yet, though. But this one is incredibly useful because it's per-dataset encryption and it works with ZFS send and all of ZFS's other features. So you can use ZFS send to create an off-site, compressed and encrypted dataset on an untrusted host with a single command. It does delta copies too. More info: https://www.youtube.com/watch?v=frnLiXclAMo
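                  In current OpenZFS that one-command off-site backup looks roughly like this (host, pool and dataset names are invented). The `-w` (raw) flag ships the blocks still encrypted, so the remote end never needs the key:

                  ```shell
                  # Full raw (encrypted) replication to an untrusted box
                  zfs snapshot tank/secure@monday
                  zfs send -w tank/secure@monday | ssh backup@offsite.example zfs receive vault/secure

                  # Later runs only send the delta since the last snapshot
                  zfs snapshot tank/secure@tuesday
                  zfs send -w -i @monday tank/secure@tuesday | ssh backup@offsite.example zfs receive vault/secure
                  ```
                  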

                  So, yes.. lots of stuff is going on in ZFS land without any involvement with Oracle.
                  Last edited by k1e0x; 10 December 2016, 06:20 AM.

                  Comment


                  • #69
                    Originally posted by aht0 View Post
                    Pool version 30 added encryption. Still easy?
                    That's for Oracle ZFS, -NOT- OpenZFS. And Oracle's implementation is broken, and they can't import OpenZFS's code into their branch.

                    Comment


                    • #70
                      Originally posted by aht0 View Post
                      ZFS "defrag" is doable by rm -rf /folder and restoration of latest snapshot. Or do whatever else which sequantially reads existing data and writes it back again. Another way to lessen it is to increase block size.
                      Well, that works for most other filesystems too, doesn't it? Delete and copy the data back. You can't do it live either, and that's an issue for a server.

                      If you are deleting stuff and restoring a snapshot from the same CoW filesystem (or reading/writing data to/from the same filesystem), I think you are not defragging a damn thing, as you aren't actually moving anything, just changing (ref)links to the same blocks on disk(s). So you need to copy the data off that pool and then copy it back. Fun and games.

                      To get a truly fragmented ZFS you'd have to do it artificially. ZFS has mechanisms that try to fight fragmentation.
                      I heard about databases and VMs being able to get there as they are large files with many small writes happening all the time, and this is a known problem for CoW filesystems in general.

                      Btrfs can disable the CoW feature (or any feature) on a folder, so you can keep files that have such issues in there (databases usually have their own checksumming already, as they were designed for filesystems lacking checksums, so it's not a major issue). I don't think ZFS can disable or enable CoW or other options on a per-directory basis.
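                      On btrfs that per-directory switch is the `C` (nodatacow) file attribute; it only affects files created after it is set (the paths here are hypothetical):

                      ```shell
                      # New files in this directory skip CoW
                      # (and with it data checksumming and compression)
                      mkdir /mnt/data/vm-images
                      chattr +C /mnt/data/vm-images

                      # Verify: the C flag should now appear in the attribute list
                      lsattr -d /mnt/data/vm-images
                      ```
                      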

                      The need for defrag might fade away entirely as SSDs (including SAS SSDs) become cheaper. You can already get up to 4TB SAS SSDs as long as you have the money for them.
                      Lessened, yes; fade away, no. Heavy fragmentation means you'll have large lists of fragments in the filesystem's metadata (instead of a couple of start and end addresses), and that does have a performance impact even with SSDs' low seek times. Still talking of databases/VMs and friends. For desktop usage it's pretty much impossible to reach that point.

                      This is more obvious on total-bs filesystems like NTFS, where there is also an upper limit in the "fragment list table" or something like that, so the system's auto-defragmentation service can still kick in on an SSD to avoid hitting it (it's very rare to need this on a PC, but on a server it's far more likely to happen).

                      Shrinking? How often have you needed it? It's usually running out of space that becomes a problem.
                      Yes, that's my point. I'm saying that they ignored features not likely to matter a lot in their use case. Servers, NAS boxes and other storage devices don't usually need to shrink, but to grow (can I remind you that growing a ZFS filesystem is still a relatively recent addition?).

                      btrfs is aimed at "everything" so yes they can't just make the filesystem in a way that they cannot add the ability to shrink a partition/volume as in a PC or other places it is convenient to be able to shrink a partition.

                      One of the reasons ext4 won over XFS on desktop Linux was that ext4 can be shrunk and XFS cannot. (XFS was also aimed at server usage, btw.)
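                      Concretely (device and mount names are hypothetical): ext4 shrinks offline with resize2fs, while the XFS tooling only grows:

                      ```shell
                      # ext4: shrinking requires unmounting and a forced fsck first
                      umount /dev/sdb1
                      e2fsck -f /dev/sdb1
                      resize2fs /dev/sdb1 20G    # shrink the filesystem to 20 GiB

                      # XFS: xfs_growfs enlarges a mounted filesystem,
                      # but there is no shrink counterpart
                      xfs_growfs /mnt/xfs
                      ```
                      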

                      Pool version 30 added encryption. Still easy?
                      Relatively. Just like compression, it acts under or over the other layers of the filesystem without interacting with them heavily. That is, blocks get compressed/encrypted and then the encrypted blocks are dealt with by the filesystem as normal; or the filesystem figures out where to place the blocks, then they get compressed/encrypted and written.

                      I'm doing a massive simplification; it's still not a walk in the park, but it's easier than touching the core of the filesystem code to add things like defragmentation, grow/shrink, disabling filesystem features in some folder, or even having an arbitrary number of levels of mirroring or striping (btrfs is designed for this but currently does not allow it, as the striping code is still half-baked).

                      Comment
